
Data-driven decision making in real time means the ability to experiment with data. Is your traditional ETL process up for it?


Data Driven Decision Making in Real Time

Srihari Allamsetti
Founder CEO

Did you know that the trigger for developing business intelligence systems goes back to the early Cold War era? In his seminal article, “A Business Intelligence System” (1958), Hans Peter Luhn of IBM described business intelligence as “an automatic system…developed to disseminate information to the various sections of any industrial, scientific, or government organization.” In the post-World War II race for development, these sectors required a way to organize and simplify the rapidly growing mass of technological and scientific data.

This establishes one fact loud and clear – the way we use data for decision-making is a game changer for growth. Today, we use a lot of terminology to denote this simple truth that we discovered as early as the 1950s. The big difference now is the need for data-driven decision making in real time. The big challenge: gather and aggregate data from a multitude of sources in a seamless, integrated fashion; process it, contextualize it, personalize it, analyze it, and bring out sharp insights on the go. This is not as daunting as it may seem. What would be daunting is to attempt it relying on traditional systems of data warehousing, ETL, and business intelligence.

Have you encountered this? Production systems generate data continuously but nobody uses data in real time because they do not want to disturb production systems. When data from multiple enterprise products has to be aggregated, it is done offline. Structured and unstructured data rarely come together. Analytical tools are static and get updated periodically at best. Are we really talking about data driven decision making here?

The due shift away from SQL

SQL has long been the staple for organizations in managing their data. It allows a broad set of questions to be asked against a single database design; it is standardized, letting users apply their knowledge across systems and enjoy support from third-party add-ons and tools; and it is versatile and proven.

However, with so much variety in data, the real power and excitement lie in playing with it – different users and analysts using it differently, making sense of it in their own ways and for their own unique purposes. It is no wonder that the early adopters of NoSQL database technology were Google, Amazon, and Facebook, who were dealing with huge variety, volume, and velocity of data. Today, every progressive, customer-centric, data-driven organization faces the same challenge, making it imperative to use NoSQL in place of relational database deployments for crucial business applications – gaining flexibility and scalability, often at a lower cost.

The discernible benefits of NoSQL and NoETL

Personalization: Demand for personalization means lots of data and real-time customer engagement. A distributed database such as a NoSQL database is designed to scale elastically to meet demanding workloads and deliver low-latency transactions.

Agility: In contrast to traditional systems, the NoSQL platform seamlessly integrates operational and analytical databases, making it possible to (a) extract information from operational data in real time, (b) manage and feed data from multiple sources to the analytics engine, and (c) store and serve analytics data to the reporting engine.
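The three steps above can be sketched in a few lines. This is a toy, in-memory illustration (the class and field names are hypothetical, not any vendor's API): every operational write also updates a running analytics aggregate, so the reporting side always reflects live data without a separate ETL pass.

```python
from collections import defaultdict

class IntegratedStore:
    """Toy store where operational writes feed analytics in the same step,
    so reports reflect live data without an offline ETL batch."""

    def __init__(self):
        self.operational = {}  # doc_id -> document (operational side)
        # region -> running aggregate (analytical side)
        self.analytics = defaultdict(lambda: {"count": 0, "revenue": 0.0})

    def write(self, doc_id, doc):
        # (a) capture operational data and (b) feed the analytics engine at once
        self.operational[doc_id] = doc
        agg = self.analytics[doc["region"]]
        agg["count"] += 1
        agg["revenue"] += doc["amount"]

    def report(self, region):
        # (c) serve pre-aggregated data to the reporting engine
        return self.analytics[region]

store = IntegratedStore()
store.write("o1", {"region": "EU", "amount": 120.0})
store.write("o2", {"region": "EU", "amount": 80.0})
print(store.report("EU"))  # {'count': 2, 'revenue': 200.0}
```

A real NoSQL platform does this across processes and nodes, but the principle is the same: analytics is updated as data arrives, not in a nightly batch.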

More with less: Current-day web and mobile applications support hundreds of millions of users. Instead of being limited to a single server, organizations should opt for distributed databases that can scale out across multiple servers. NoSQL allows capacity to grow by simply adding commodity servers, making it far easier and less expensive to scale. Further, in the age of IoT, NoSQL helps enterprises scale synchronized data access across connected devices and systems, store large volumes of data, and sustain high performance and availability.
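To make "add a commodity server, gain capacity" concrete, here is a minimal sketch of hash-based sharding, the core idea behind scale-out stores. It is a simplification: the node names are invented, and production systems use consistent hashing (so adding a node moves only a fraction of keys) rather than the modulo routing shown here.

```python
import hashlib

class ShardedStore:
    """Toy hash-sharded key-value store: each key is routed to one node,
    so total capacity grows with the number of nodes."""

    def __init__(self, servers):
        self.servers = {name: {} for name in servers}  # node -> local data
        self.names = sorted(self.servers)

    def _server_for(self, key):
        # Deterministic routing: hash the key, pick a node by modulo
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.names[h % len(self.names)]

    def put(self, key, value):
        self.servers[self._server_for(key)][key] = value

    def get(self, key):
        return self.servers[self._server_for(key)].get(key)

cluster = ShardedStore(["node-a", "node-b", "node-c"])
for i in range(100):
    cluster.put(f"user:{i}", {"id": i})
print(cluster.get("user:42"))  # {'id': 42}
```

The 100 records end up spread across the three nodes; adding a fourth node in a real system would rebalance some keys onto it and raise total capacity.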

Risk intelligence: Intelligent, responsive, and proactive management of fraud requires several data points – detection algorithm rules, customer information, transaction information, location, and time of day – processed at scale and in a flash. Elastically scalable NoSQL databases can do this more reliably.
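As an illustration of how those data points combine, here is a hypothetical rule-based fraud score. The rules, weights, and thresholds are invented for the sketch; real detection engines tune far richer rule sets against historical data.

```python
from datetime import datetime, timezone

def fraud_score(txn, customer):
    """Combine transaction and customer data points into a risk score.
    Rules and weights are illustrative, not a production rule set."""
    score = 0
    if txn["amount"] > 10 * customer["avg_amount"]:
        score += 40  # unusually large transaction for this customer
    if txn["country"] != customer["home_country"]:
        score += 30  # transaction from an unfamiliar location
    if txn["time"].hour < 6:
        score += 20  # odd hour of day
    return score

txn = {"amount": 5000.0, "country": "BR",
       "time": datetime(2024, 1, 1, 3, 0, tzinfo=timezone.utc)}
customer = {"avg_amount": 120.0, "home_country": "IN"}
print(fraud_score(txn, customer))  # 90
```

The point of the NoSQL angle is that the customer profile, transaction history, and rules must all be fetched and evaluated within a single payment's latency budget, across millions of concurrent transactions.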

And the aha moment

Here comes the real deal. When you look at the advanced, future-ready data engineering solution of Data Lake – where different users can experiment with the data, ‘fail fast’, and rapidly work the analytics part – adoption of NoSQL and NoETL is a no brainer.

If you are looking for a team that is not just adept at data engineering and analytics, but also has deep experience creating innovative data solutions with NoSQL and NoETL and building cognitive Data Lakes, give us a shout.
