The world has moved to a data-centric paradigm, the era of “Big Data,” in which hundreds of millions of computers and mobile devices are continuously creating staggering amounts of information about people and everything else. This can only accelerate.
The change is so great that the computing tools we’ve used for the past decade are no longer capable of meeting these new challenges. Instead, radically different approaches to databases, storage and other computing problems have arisen, mainly from the consumer Internet, as demonstrated by Web giants like Google, Yahoo, Facebook and Amazon, along with the start-ups in their orbits.
This world of Big Data presents both opportunities and threats. In addition to familiar problems like computer viruses, digital piracy and malicious attacks on servers, we can expect new problems to emerge, including manipulation and doctoring of data as well as identity falsification and impersonation. All of these will corrode the trust that has been the hallmark of the Internet.
Advances in software will have to address these risks. It is likely that software will become more “responsible,” able to make decisions on the fly to contain threats to the entire Web ecosystem. We can also expect smarter, content-aware network technologies to emerge to further ease these threats. Everything will increasingly happen in real time, increasing the need for robust and responsive systems for reputation management and trust. These systems will rely mainly on software algorithms, augmented by online collective human judgment.
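The blend of algorithmic scoring and collective human judgment described above can be sketched in miniature. The function below is purely illustrative, not any real system's method; the weighting scheme and all names are hypothetical.

```python
def reputation_score(algorithmic_signal: float, human_votes: list[int],
                     human_weight: float = 0.4) -> float:
    """Toy reputation score: blend an automated trust signal (0..1)
    with collective human up/down votes (+1 or -1).

    The 0.4 weight on human judgment is an arbitrary illustrative choice.
    """
    if human_votes:
        # Map the mean vote from [-1, 1] into [0, 1].
        human_signal = (sum(human_votes) / len(human_votes) + 1) / 2
    else:
        human_signal = 0.5  # no human input yet: neutral prior
    return (1 - human_weight) * algorithmic_signal + human_weight * human_signal
```

With no votes, the score simply tracks the automated signal; as votes accumulate, collective judgment pulls the score up or down, which is the "augmented by online collective human judgment" idea in a single formula.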
The social nature of the Web will encourage a sort of instant global consensus on important issues, with little time for filtering, comparison or critical analysis. That means it will be harder to distinguish genuine public opinion from whatever the online horde happens to be saying at a given instant.
Indeed, every bit of data will become even more correlated with other bits of information. There will also be more data about the data, or “metadata,” and more analytics needed to make sense of it all. It will become more difficult to discern what matters to each of us individually, versus what is interesting or entertaining or trendy for the masses.
Indeed, the very notion of the “masses” is probably an anachronism, in need of redefinition. These ever more powerful software analytic tools, trained on the massive data from the social and ubiquitous Web, are revealing who we are and what we know. This knowledge is accessible in real time to more and more people and to commercial and government organizations.
The leveling and commoditization of knowledge as a shared, common resource will set the stage for the creation of new, original and arbitrary knowledge. Software will play a key role as a catalyst for this next wave of intellectual value creation, one that greatly expands the pool of “everything known by everyone.” This new generation of disruptive knowledge will be built on top of the standardized knowledge platform known today as the social Web.
The monetization of personal data will require more sophisticated software tools, which will allow companies and individuals to trade valuable personal data in a controlled and responsible fashion. These new tools will allow us to respect the diversity of cultures, since different parts of the world will have different ideas of how this new knowledge should be put to use.
Today’s networks are based on hardware, and thus can be too static to support the rapidly evolving Web and its avalanche of new applications. We will see innovations to turn today’s networks into programmable infrastructure, resembling data centers.
Content- or context-sensitive networks will be needed, taking us far beyond today’s sometimes simplistic discussions about “net neutrality.” An example is OpenFlow, a technical protocol that separates the intelligence inside a network from the network’s hardware. OpenFlow is part of the Software-Defined Networking initiative, in which network software plays a crucial role in making the network more programmable and responsive. What is today called the cloud will as a result evolve into interconnected clouds, or networks of clouds.
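The separation OpenFlow standardizes can be sketched abstractly: a controller (the network's intelligence) installs match-action rules into a flow table, and the switch hardware merely matches packets against them. This is a toy illustration of the idea, not the OpenFlow wire protocol; the field names and actions are hypothetical simplifications.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowRule:
    """One controller-installed rule: match a destination, apply an action."""
    match_dst: str   # destination address to match (simplified match field)
    action: str      # e.g. "forward:port2" or "drop" (simplified actions)
    priority: int = 0


class FlowTable:
    """Data plane: matches packets against rules the control plane installed."""

    def __init__(self) -> None:
        self.rules: list[FlowRule] = []

    def install(self, rule: FlowRule) -> None:
        # Called by the controller (control plane), not by the switch itself.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)  # highest priority first

    def lookup(self, dst: str) -> str:
        for rule in self.rules:
            if rule.match_dst == dst:
                return rule.action
        # Table miss: in OpenFlow, the packet can be sent to the controller.
        return "send-to-controller"
```

The point of the sketch is the division of labor: all decision-making lives in software that writes rules, so the network's behavior can be reprogrammed without touching the forwarding hardware.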
This interplay between computing and networking is going to increase, creating rich, unexpected and intimate fusions. Will the pattern continue?
There is no doubt that the massive scalability of Internet-based businesses has changed the way we think about research involving computing and networking. The urgency created by these scale effects is a result of the sheer amount of data available, which creates opportunities for research as well as monetization.
In the past, fewer game-changing companies existed. Their performance was often judged by what happened in their R&D labs, with everyone measured the same way. More important, all the players stayed in their own business territory. But that all changed when the world went digital. The rules that had existed for many years — “Do not come into my territory, and I will not get into yours” — simply no longer apply.
The changes have many implications. For one thing, important, viable research work and innovation at the core of computing and communications are being redistributed and shared with start-ups. Some of these start-ups come from academic projects that have been turned into companies by private investors.
There are many reasons to be enthusiastic about the potential of value creation in the years ahead for computing and networking. Given the challenges, we see tremendous opportunities for data scientists, computer scientists and entrepreneurs.