When it comes to storing data, it can feel like we’re running out of numbers. If you are old enough, you may remember when floppy disks were measured in kilobytes in the 1980s. If you’re a little younger, you’re probably more familiar with the gigabytes of yesterday’s hard drives or the terabytes of today’s.

The amazing computational footprint of humanity
But we are now producing data at an unprecedented rate, and making sense of it requires numbers so large they strain human comprehension. To see what new territory we’re entering, consider the following: Market research firm IDC estimates that in 2020, global data creation and consumption totaled 59 zettabytes – that’s 59 trillion gigabytes in old money.
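For anyone who wants to check that conversion, here is a minimal Python sketch. The 59-zettabyte figure is IDC’s; the script itself and the use of decimal SI units are our own assumptions:

    # Sanity-check: 59 zettabytes expressed in gigabytes (decimal SI units assumed).
    ZETTABYTE = 10 ** 21  # bytes
    GIGABYTE = 10 ** 9    # bytes
    total_bytes = 59 * ZETTABYTE
    print(f"{total_bytes / GIGABYTE:.1e} GB")  # 5.9e+13, i.e. 59 trillion gigabytes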

However, while the total volume of data is now on an almost unimaginable scale, the rate of growth is even more striking. In 2012, IBM estimated that 90% of the world’s data had been created in the preceding two years, and global data has continued its explosive growth ever since, with no sign of slowing. In fact, IDC predicts that humanity will create more data over the next three years than it did in the previous three decades.

The obvious question is: What has changed? Why are we suddenly generating so much more data than ever before? Smartphones are part of the story, of course. Practically everyone now carries a powerful computer in their pocket, one that far exceeds the capabilities of previous generations of desktop machines. These devices are constantly connected to the internet, receiving and transmitting data even when idle. The average American Gen Z adult unlocks their phone 79 times a day, roughly once every 13 minutes. The always-on nature of these devices has fueled a flood of new data: every 24 hours, 500 million new tweets, 4,000 terabytes of Facebook data and 65 billion new WhatsApp messages are sent into cyberspace.
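Incidentally, the “once every 13 minutes” figure only works out if unlocks are spread over waking hours rather than the full day. A quick Python check, assuming roughly 17 waking hours (our assumption, not a detail from the survey):

    # 79 unlocks spread across an assumed ~17 waking hours per day.
    unlocks_per_day = 79
    waking_minutes = 17 * 60  # assumption: ~17 waking hours
    print(round(waking_minutes / unlocks_per_day, 1))  # ~12.9 minutes between unlocks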

Smartphones are just the tip of the iceberg
But smartphones are only the most visible manifestation of the new data reality. While you might assume that video platforms like Netflix and YouTube account for the largest share of global data, in reality the entire consumer share is only around 50%, and that percentage is expected to decline gradually in the coming years. So what makes up the rest?

The rise of the Internet of Things and connected devices has expanded our global computational footprint. The fastest year-over-year growth is actually happening in a category of information known as embedded and productivity data: information collected from sensors, connected machines and automatically generated metadata, all of it operating behind the scenes, beyond the visibility of end users.

Take autonomous vehicles, for example, which use technologies such as cameras, sonar, lidar, radar and the Global Positioning System (GPS) to monitor the vehicle’s surroundings, plan routes and avoid hazards. Intel has calculated that the average self-driving car using current technology will generate four terabytes of data per day. In other words, a single vehicle will produce as much data each day as roughly 3,000 people.
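Unpacking that comparison with a quick sketch: dividing one car’s daily output by 3,000 people implies that an ordinary person generates a bit over a gigabyte of data per day. The per-person figure is inferred from the article’s own numbers, not an Intel statistic, and decimal units are assumed:

    # What the 3,000-person comparison implies about per-person data generation.
    car_bytes_per_day = 4 * 10 ** 12  # 4 TB per day, decimal units assumed
    people = 3_000
    gb_per_person = car_bytes_per_day / people / 10 ** 9
    print(round(gb_per_person, 2))  # ~1.33 GB per person per day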

It will also be important to keep this data secure, given how many uses it will have. For one thing, it will help plan maintenance intervals and diagnose technical problems in the most efficient way. It could also feed into decentralized systems that coordinate traffic flow and reduce energy consumption across an entire city. Finally, and perhaps most importantly in the short term, it will be essential for resolving legal disputes in the event of an accident or injury.

Autonomous cars are only a small part of the story. According to McKinsey & Company, the share of companies using IoT technology increased from 13% to 25% between 2014 and 2019, and the total number of IoT devices is expected to reach 43 billion by 2023. The economy of the future will see a massive increase in connected devices producing data that is potentially sensitive, or even highly sensitive.

Is Moore’s Law nearing its end?
There are two factors to consider here, both of which point to the growing value of decentralized networks. First, while we have more data than ever to bring to bear on global challenges such as climate change, economic instability and the spread of airborne viruses like COVID-19, we may be approaching a hard technical limit on how much of this information centralized computers can process in real time. While data volumes have grown dramatically in recent years, computing power has not kept pace.

Source: CoinTelegraph
