As hard as it is for me to believe, I first started using computers in the late ’70s and early ’80s. Like most technologists my age, I’ve seen some cyclical trends (e.g., the back-and-forth from server-heavy computing to client-heavy computing) but, more importantly, I’ve witnessed some long-term trends over the past few decades that seem to persist and perhaps accelerate today.
1. From Code-Heavy to Data-Heavy
In the eighties, just as today, computers certainly processed data. The code that defines that processing has steadily grown; in fact, many programmers revel in how entire programs could be written in mere kilobytes back then, whereas today’s programs require thousands of times more memory (primarily because of the extensive libraries that are part of even the most basic programs). However, despite the growth in code size, the volume of data has grown at an exponentially faster pace, and it continues to outpace code. Why? That’s covered below, but the term “big data” has been used to capture this notion.
What Does This Mean For Us?
What it means is that the data and its processing must become increasingly co-dependent and thus co-located. That is, while it made sense in the ’90s to bring the data to whatever machine the user was using (aka the “client-server model”), nowadays it may make more sense to send the code to the much larger data so that it operates directly on (or very close to) where the data is stored, as the sketch below illustrates. All of this will continue to push end-user devices (laptops, phones, etc.) toward being user-interface-only, leaving storage and processing to happen elsewhere.
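To make the contrast concrete, here is a minimal Python sketch of the two approaches. The node names and log records are invented purely for illustration; a real system (a MapReduce engine, for instance) would handle partitioning, scheduling, and failures for you.

```python
# A toy illustration of the "send the code to the data" idea discussed above.
# The node names and data are hypothetical; real frameworks handle
# partitioning, scheduling, and fault tolerance.

# Pretend each "node" stores a large partition of log records locally.
NODE_PARTITIONS = {
    "node-1": ["error", "ok", "ok", "error", "ok"],
    "node-2": ["ok", "error", "ok"],
    "node-3": ["ok", "ok", "ok", "error", "error", "error"],
}

def count_errors(records):
    """The 'code' we ship to each node: summarize its local data."""
    return sum(1 for r in records if r == "error")

# Client-server style: pull ALL the data across the network, then compute.
pulled = [r for records in NODE_PARTITIONS.values() for r in records]
total_pulled = sum(1 for r in pulled if r == "error")

# Code-to-data style: run the function where the data lives and move only
# the tiny per-node results back to be combined.
per_node_results = {node: count_errors(records)
                    for node, records in NODE_PARTITIONS.items()}
total_shipped = sum(per_node_results.values())

assert total_pulled == total_shipped  # same answer, far less data moved
print(per_node_results, "->", total_shipped)
```

The answer is identical either way; what changes is how many bytes cross the network, which is exactly why code-to-data wins as the data keeps growing.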
2. From Single Big Iron to Multiple Commodity
In the ’60s and ’70s, nearly all computing happened on “big iron” machines: mainframes and smaller “mini” computers from a few specialized vendors such as IBM and Digital Equipment Corporation. Businesses would have one or a few of these machines handle their entire business operations. The ’80s and ’90s saw the introduction of mid-range “workstation” and “server-class” machines that could do a lot of heavy lifting but typically required an order of magnitude more machines. These machines were much less specialized than their predecessors, with options from Sun, HP, IBM, and many other vendors. By the ’00s, most computing had shifted to relatively generic “commodity” computers that aren’t much different architecturally from the machines we individually use (although they do fit in racks and don’t have keyboards and monitors). It is true that some scientific and research institutions still rely on now seemingly archaic “supercomputers” that remind us of the mainframe world; increasingly, however, as they replace these centralized machines, they’re bringing in clusters of commodity computers that, in aggregate, have much more power than the supercomputers they’re replacing.
What Does This Mean For Us?
This means that we’re going to continue seeing the streamlining of the computer industry, with specialized computers remaining only in the most niche industries and applications. As hardware becomes ever more commoditized, the cost per unit of processing and storage will keep falling, and the value in the industry will increasingly derive from interface technology: the components that interact with humans (ever higher-end displays, no-touch visual input technologies, and so on).
3. From Structured, Enterprise-Generated Data to Unstructured, User-Generated Data
We’ve all been hearing a lot about this one: increasingly, the data that enterprises store and manipulate isn’t generated by their own systems but, rather, by the customers and users of those systems. For that reason, the data is becoming increasingly unstructured and multimedia-rich, as opposed to tabular and character-based.
What Does This Mean For Us?
The implication of all this is that business intelligence will not only continue to become a more crucial aspect of enterprises as they try to harness the information in the data they’ve been harvesting, but the field of data warehousing and analysis in general will continue to experience revolutionary changes in the tools and technologies needed to handle these increasingly broader and deeper data sets. While this doesn’t mean the death of traditional, structured relational databases, I do think they may be relegated to a secondary position, supporting next-generation data repositories able to deal with this new data paradigm (a rough contrast is sketched below). Technologies such as Hadoop and NoSQL have gained a lot of traction here, but other technologies will certainly evolve over the coming few years. Further, because much of this data is about “us” (the users), privacy concerns and legislation will likely preoccupy us for quite some time.
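As a rough illustration of why user-generated content strains a fixed relational schema, here is a small Python sketch contrasting a tabular record with a document-style one of the kind that NoSQL document stores embrace. The field names and values are made up for the example.

```python
import json

# Enterprise-generated, tabular record: every row has the same fixed columns.
order_row = ("2013-07-01", "SKU-1042", 2, 19.99)

# User-generated content rarely fits that mold: each item can carry different
# fields, nested structures, and references to attached media. Document-style
# repositories store records in exactly this flexible shape.
review_doc = {
    "user": "jane_doe",                      # hypothetical example data
    "rating": 4,
    "text": "Great product, shipping was slow.",
    "photos": ["img/3481.jpg", "img/3482.jpg"],
    "replies": [
        {"user": "support_team", "text": "Sorry about the delay!"},
    ],
}

# The document serializes naturally to JSON for storage or later analysis.
print(json.dumps(review_doc, indent=2))
```

The tabular row is easy to aggregate but rigid; the document bends to whatever the user happened to contribute, which is why this shape of data is pulling analysis tools in new directions.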
4. From Big & Stationary to Small & Integrated
Aside from the move to commoditized machines in the enterprise (point #2 above), the computers we deal with are becoming not just smaller but more integrated into our lives and into the particular functions we need them for. We’ve gone from mainframes to desktops to laptops to phones to what is now becoming the “wearable computing” movement (with devices such as Google Glass and smart watches).
What Does This Mean For Us?
Over the coming decade, computers will not just enter our wardrobe through wearable computing but will become increasingly integrated into our home appliances (e.g., refrigerators and ovens that can tell us the milk is low or the roast is ready), our cars, our televisions, and every other electronic tool. These applications are all being made possible by remarkable progress in wireless and cellular communication.