The volume of data in today’s world seems immense, big enough that we can talk about “Big Data,” but this may well turn out to be only the beginning. There’s every reason to believe that, when it comes to data, we’re living in an antediluvian moment. We think we’re dealing with volumes of data that deserve to be called “big,” but the flood is yet to come.
That’s not to say there hasn’t already been a sea change in the volume of data that’s out there. It’s big indeed compared to the data volumes of even the recent past. Big Data makes news on account of scale alone, and there’s so much information to manage that we’re constantly chasing new ways to find order in the chaos and to extract meaning from data streams that don’t easily reveal their secrets. Those jobs are hard enough right now.
But it’s only the beginning. The data we have so far may end up looking like a mere trickle when we consider the Nile-sized river that’s coming our way in the very near future.
The coming flood isn’t going to be produced by the usual suspects. It won’t necessarily come from laptops, tablets and phones. They’ll all play a part, and phones will play a big one, but the universe of objects that generate data is rapidly expanding, and it includes everything from cars to houses to wearable devices. Wherever there’s something that’s “smart,” there’s data, and those new producers of data belong to a phenomenon that’s joined “Big Data” as a newsworthy buzzword. We’ve met the “Internet of Things.”
Despite its status as a buzzword for 2016, the IoT has been with us for a surprisingly long time. We saw its faint stirrings over 40 years ago, in 1974, when the first networked ATMs came online.
That network wasn’t connected on a worldwide level, but the Internet did join forces with a Thing in 1982 at Carnegie Mellon University, so long ago that the Internet was still the ARPANET. In that case, the Thing was a Coke machine that could be queried for the presence of bottles and their status: Had they been there long enough to be cold? Thirsty programmers from all over the world could check in, even if the exercise was purely academic for anyone not in the machine’s immediate vicinity.
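For the curious, here’s a minimal Python sketch of the kind of status logic that machine exposed. The chilling threshold and the interface are assumptions for illustration, not the original code; the real machine tracked how long each column had been loaded and answered queries about it over the network.

```python
# A minimal sketch of the CMU Coke machine's status logic.
# The three-hour threshold and this interface are assumptions,
# not the historical implementation.
import time

COLD_AFTER_SECONDS = 3 * 60 * 60  # assumed: three hours to chill a bottle

class CokeColumn:
    def __init__(self):
        self.loaded_at = None  # None means the column is empty

    def load(self):
        # Record when a bottle was put in, so we know how long it has chilled.
        self.loaded_at = time.time()

    def dispense(self):
        self.loaded_at = None

    def status(self):
        # This is the answer a remote query would get back.
        if self.loaded_at is None:
            return "EMPTY"
        chilled = time.time() - self.loaded_at >= COLD_AFTER_SECONDS
        return "COLD" if chilled else "WARM"
```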
From that first machine, it took a while for things to get up to speed, but the deluge that followed was more than just Coke, and the IoT now has plenty of momentum. By 2008, there were more devices connected to the Internet than people in the world. In 2015, a year in which the market for wearable devices grew 223 percent, more than 4.9 billion connected “things” were in use. By 2020, more than a quarter billion vehicles will be connected to the Internet.
And those numbers are drawn from the more conservative estimates. There’s no shortage of more dramatic predictions.
One big problem we face is that the traditional approach can’t cope with data arriving in quantities orders of magnitude greater than what we’re used to. Up to a point, we could get by with raw computing power: collect your data, run your reasonably smart software on as many servers as you can find, and wait for your questions to be answered or your problem to be solved. If the volume of data grows, throw more servers at it.
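To make that recipe concrete, here’s a minimal Python sketch of the “more servers” approach; the word-count job, the sample data, and the worker count are hypothetical stand-ins for real analysis software on a real cluster. Partition the collected data, fan it out, aggregate the partial results, and scale by raising the worker count.

```python
# A sketch of the traditional batch approach: split the data across
# workers ("servers"), analyze each chunk, and merge the results.
from collections import Counter
from multiprocessing import Pool

def analyze(chunk):
    """Stand-in for the per-server analysis job: count words per record."""
    return Counter(word for record in chunk for word in record.split())

def batch_analyze(records, workers=4):
    # Partition the data set into one chunk per worker.
    chunks = [records[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(analyze, chunks)
    # Aggregate the partial results into one answer.
    return sum(partials, Counter())

if __name__ == "__main__":
    data = ["sensor reading ok", "sensor reading fault", "sensor reading ok"]
    print(batch_analyze(data).most_common(3))
```

Scaling up, in this model, means nothing more than raising `workers` and buying more hardware.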
At some point, though, server-throwing won’t be enough. Reality intrudes. We’re entering a world in which an object that’s not connected to the Internet will be the exception rather than the rule, and the sheer scale of information flowing in every direction will overwhelm the machines. Raw computing power has its limits. Timely analysis won’t be possible, unless “timely” is measured in weeks. No organization can live with that.
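To see why raw power alone runs out of road, consider a rough back-of-envelope calculation. Every figure here is assumed purely for illustration:

```python
# Every figure below is an assumption for illustration, not a measurement.
devices = 5e9                   # assumed count of connected "things"
bytes_per_device_per_day = 1e6  # assumed 1 MB of data per device per day
daily_volume = devices * bytes_per_device_per_day  # 5e15 bytes = 5 PB/day

server_throughput = 1e12  # assumed bytes one server can analyze per day
servers_needed = daily_volume / server_throughput

print(f"{daily_volume / 1e15:.0f} PB/day needs {servers_needed:,.0f} servers")
# -> "5 PB/day needs 5,000 servers" -- and that's before the device count
#    grows by another order of magnitude.
```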
It’s a challenge we’ve taken on directly: we’ve been successful in speeding up data analysis for our clients.