RealTimeWeekly | The Future of Data Computing

The Future of Data Computing


18 Jan

While today’s Data Stream Networks have mastered the task of reliably routing data to and from data points and devices, there still isn’t a technically simple way to change that data mid-stream. Standard practice for data changes, calculations, and computations is to perform them on a server: data from remote points is pulled into a server, the desired calculations or changes are made, and the data is pushed back into the data stream and on to its endpoint. While this practice works, it loses efficiency and functionality as the number of data points and computations grows and the app scales larger.
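The round trip described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's API: `transform` and `server_round_trip` are made-up names, and the doubling calculation is a placeholder for whatever computation the server performs.

```python
def transform(message: dict) -> dict:
    """Server-side computation applied to one message (placeholder logic)."""
    message["value"] = message["value"] * 2  # illustrative calculation
    return message

def server_round_trip(incoming: list) -> list:
    """Pull each message off the stream, compute, push the result back.
    Every message costs a full hop to the server and back."""
    outgoing = []
    for message in incoming:
        outgoing.append(transform(message))
    return outgoing

print(server_round_trip([{"value": 3}]))  # [{'value': 6}]
```

The cost that the article points at is in the loop: every message, however trivial the computation, must travel to the server and back before reaching its destination.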

Josh Marinacci from PubNub foresees a future of smarter networks where at least some calculations and computations can occur in the data stream itself, without requiring the data to be routed to a server. This “stream oriented computing” opens the door for large-scale apps whose computations remain simple to execute even as the network of app users grows larger and more complex. According to Josh in his article on stream oriented computing, “…any code that is conceptually simple but hard to scale to millions of users is a prime candidate for moving into a smarter network.”

Josh gives an interesting example of a chat app designed specifically to let junior high school students from all over the world talk to one another. Not too complex, right? Now, say a feature was added to filter out profanity, either by changing the profane word to a different word, or by stopping the message and notifying the student that it was not sent. With this filtering in place, every message would have to be pulled down into a server, run through the profanity-checking code, and then sent back out into the data stream to its original destination, changed or unchanged. The concept is still simple, but it becomes hard to scale to millions of students sending millions of messages all over the world. This filtering problem could be solved by putting the computation right into the data stream itself and skipping the server.
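The profanity filter is simple enough to sketch. The handler below is a hypothetical stand-in for an in-stream function, written in Python for illustration; the `on_message` name, the `PROFANITY` word list, and the message shape are all assumptions, not PubNub's actual interface.

```python
# Placeholder word list; a real filter would be far larger.
PROFANITY = {"darn", "heck"}

def on_message(message: dict) -> dict:
    """Hypothetical in-stream handler: rewrite flagged words before
    the message is delivered, with no server round trip."""
    words = message["text"].split()
    if any(w.lower() in PROFANITY for w in words):
        cleaned = " ".join(
            "****" if w.lower() in PROFANITY else w for w in words
        )
        return {"text": cleaned, "filtered": True}
    return {"text": message["text"], "filtered": False}

print(on_message({"text": "oh heck"}))   # {'text': 'oh ****', 'filtered': True}
print(on_message({"text": "hi there"}))  # {'text': 'hi there', 'filtered': False}
```

The point of the example is where the function runs: executed inside the stream, the same few lines apply to every message in flight, so the filter scales with the network rather than with a central server.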

Putting computation directly into the data stream opens up new possibilities for convenient data processing. Not every type of computation will be appropriate to move into the stream, and some will surely remain on servers, but plenty of computations could benefit from stream oriented computing.

Moving computation off the servers and into the data stream will help the next generation of apps scale, allowing maximum reach across endless numbers of users while requiring minimal server processing and leaving maximum creative flexibility with our data. For more examples of how this may change the future of computing, check out Josh’s article on Embedded Computing.
