Rethinking Reporting and Analytics

It’s easy to think of business reporting and analytics as separate activities, but in reality the two have to be considered in lockstep: documenting and presenting the current state of things, and helping to form some sort of view of how things will look in the future.

If you have waded through business computing technology for more than thirty years as I have, you’ll likely have encountered the term RICEFW.

In my experience, the F and W are more recent additions to the abbreviation. For the uninitiated, it stands for Reports (R), Interfaces (I), Conversions (C), Enhancements / Extensions (E), Forms (F) and Workflows (W).

I don’t recall encountering the term at university, but it came up as part of preparation work for Y2K conversions of systems I was involved in during the late 1990s. You’ll find it popular among SAP consultants, and it seems it may have its roots in the Accenture consulting groups of that time.

Either way, it has stuck around as something that I will periodically revisit. The reason I mention this is that nowhere in RICE or RICEFW is there anything specifically about Business Intelligence or Analytics.

Both terms have emerged and been popularized by the industry in the last few years as the next generation of reporting, so perhaps they supplant the R in RICEFW.

The challenge with this positioning, though, is that data analysis is something that has always been going on. Reports were rarely written for their own sake; they almost always had a specific purpose and a determined objective, namely to assist business leaders and managers in pivoting effort where necessary or in holding the course on business plans and strategy.

The challenge is that analytics is, more often than not, treated these days as the visualization of information: presenting statistical or financial values in self-explanatory charts and diagrams. Admittedly, it is difficult to present trends and summarize information where there is a lack of data, but the reality is that certain types of “analytics” are nothing more than what some refer to as data “modelling”, namely predictive data modelling.

Taking historical performance figures, mashing them together with future plans, and applying variables that are expected to lead to certain outcomes is very fashionable. The world of predictive analytics is awash with technology suitors, all claiming to use the latest machine learning and artificial intelligence to help your business on this journey.

The basis for most predictive modelling, though, is historical numbers combined with variables that will influence expectations for the future. Data scientists will often describe the latter as feature engineering.
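As a rough sketch of what that looks like in practice, the fragment below builds a small synthetic revenue history, derives a few illustrative features and fits a simple model. The column names, the synthetic data, and the use of pandas and scikit-learn are my own assumptions for illustration, not a prescription for any particular product.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical monthly revenue history, as it might be extracted from an ERP system.
rng = np.random.default_rng(0)
months = pd.date_range("2022-01-01", periods=24, freq="MS")
history = pd.DataFrame({
    "month": months,
    "revenue": 100 + 2.5 * np.arange(24) + rng.normal(0, 5, 24),
})

# "Feature engineering": derive the variables expected to influence future outcomes.
history["month_index"] = np.arange(len(history))      # long-run trend
history["quarter"] = history["month"].dt.quarter      # crude seasonality signal
history["prior_month"] = history["revenue"].shift(1)  # recent momentum
history = history.dropna()

features = ["month_index", "quarter", "prior_month"]
model = LinearRegression().fit(history[features], history["revenue"])

# Score the next month (January of the following year) from its engineered features.
next_month = pd.DataFrame({
    "month_index": [24],
    "quarter": [1],
    "prior_month": [history["revenue"].iloc[-1]],
})
print(model.predict(next_month))
```

The model itself is deliberately trivial; the point is that the engineered variables, not the algorithm, carry most of the predictive intent.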

As a consequence, the ability both to fill systems with information and, in turn, to extract that information is more critical than it has ever been.

If one comes to accept that predictive modelling and predictive analytics are the next great way to provide reports that can influence decision making, then this must be particularly true of businesses using systems like SAP’s ERP.

This is also one of the reasons why data-loading products are important in feeding the insatiable appetite of ERP and CRM systems. Once data is in systems of record like SAP and Salesforce, the next challenge is leveraging it to produce meaningful reports that can help the business make decisions. Historically, the term used to describe these reporting tools has been ‘Decision Support Systems’ (DSS).

In the mid-1970s, the notion of being able to rely on DOS and UNIX computing systems to produce summarized and aggregated data represented radical thinking about the use of information systems, something which today we accept as commonplace and normal.

Although a lot of effort today is focused on making decision support systems more effective, the reality is that their efficiency in delivering meaningful and useful information remains problematic. Software vendors offer a lot of options, but the fragmented nature of systems architectures means that even the most flexible of tools probably won’t cover all the scenarios you might consider.

Faster reporting at a lower cost

Many technologies and businesses have been pushed to move away from running large-scale historical reporting and analysis on the online transaction processing (OLTP) systems themselves, and to instead rely on an external reporting repository that is periodically refreshed with data from the OLTP system. This has led to the rise of the data warehouse and the popularity of platforms like Redshift, BigQuery, Snowflake, Hadoop and the like.
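A minimal sketch of that refresh pattern might look like the following, assuming hypothetical connection strings, a hypothetical sales_orders table and a last_changed_at timestamp column. The point is simply that downstream reports read from the landed copy rather than from the live transactional system.

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical endpoints; real OLTP and warehouse connection details will differ.
oltp = create_engine("postgresql://reporting_user:secret@erp-host/erp")
warehouse = create_engine("postgresql://etl_user:secret@warehouse-host/dwh")

# Pull only the rows changed since the last refresh (incremental extract).
last_refresh = "2024-01-01"
changed = pd.read_sql(
    text("SELECT * FROM sales_orders WHERE last_changed_at >= :since"),
    oltp,
    params={"since": last_refresh},
)

# Land the extract in the reporting repository; reports query this copy,
# leaving the transactional system free to process transactions.
changed.to_sql("stg_sales_orders", warehouse, if_exists="append", index=False)
```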

OLAP and data mining technologies became a big focus area in the mid-1990s as ways to improve reporting from these repositories. The consequence, though, was that data analysis shifted from active to passive: transactional processing performance was protected, while the business could still run the large aggregations and reports it needed to support decision making.

This demand for efficiency made it more important for reporting engines to move away from the sequential processing of stored information, in the way that relational database models work, towards non-SQL or non-relational methods such as those used by in-memory databases. Data stored and analyzed in this way scales better and delivers superior performance.

The data model used in this approach addresses several issues that the traditional RDBMS model was not designed to address, namely large quantities of unstructured, semi-structured, and structured data all analyzed in unison.
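To make the “in unison” point concrete, here is a small illustrative sketch in Python that flattens semi-structured records and joins them to a structured table so both can be queried together. The record shapes and names are hypothetical and not tied to any particular vendor product.

```python
import pandas as pd

# Structured data: rows as they might come from a relational table.
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "name": ["Acme", "Globex"],
})

# Semi-structured data: nested JSON-like records, e.g. from a CRM or web API.
events = [
    {"customer_id": 1, "event": "quote", "detail": {"amount": 1200, "currency": "USD"}},
    {"customer_id": 2, "event": "order", "detail": {"amount": 800, "currency": "EUR"}},
]

# Flatten the nested records and join them to the structured table,
# so both shapes of data can be analyzed in one pass.
flat_events = pd.json_normalize(events)
combined = customers.merge(flat_events, on="customer_id")
print(combined[["name", "event", "detail.amount", "detail.currency"]])
```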

Analytics products for the desktop take a different approach, supporting mid-tier reporting in an accelerated way by pushing relational data from Excel into the computer’s memory and using an in-memory repository for analysis. This approach analyzes the data a column or cell at a time instead of row by row, resulting in dramatically faster data analysis.
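A crude way to see why column-at-a-time processing wins is the sketch below. The column names are hypothetical and the frame is generated synthetically so the example stands on its own; with a real workbook, the load step would be something like pd.read_excel.

```python
import time

import numpy as np
import pandas as pd

# With a real workbook you would do something like:
#   orders = pd.read_excel("sales.xlsx", sheet_name="Orders")
# Here we generate a synthetic frame so the sketch runs on its own.
n = 1_000_000
orders = pd.DataFrame({
    "region": np.random.choice(["EMEA", "AMER", "APAC"], n),
    "quantity": np.random.randint(1, 50, n),
    "unit_price": np.random.uniform(5, 500, n),
})

# Row by row: walk every record, the way a cursor over a transactional table would.
start = time.perf_counter()
total_rows = 0.0
for row in orders.itertuples(index=False):
    total_rows += row.quantity * row.unit_price
print("row-by-row:", time.perf_counter() - start, "seconds")

# Column at a time: operate on whole in-memory columns at once.
start = time.perf_counter()
total_cols = (orders["quantity"] * orders["unit_price"]).sum()
print("columnar:  ", time.perf_counter() - start, "seconds")

# Same answer, but the columnar version is typically orders of magnitude faster.
assert abs(total_rows - total_cols) < 1e-6 * total_cols
```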

When used in conjunction with on-demand queries, the data used by desktop analytics in Excel workbooks can be refreshed on demand from OLTP systems like ERP and CRM, but this also leads to data proliferation, which may be an unwanted side effect. Far better, then, to use an integrated analytics approach that doesn’t require replication or data proliferation.

Even with this integrated approach, data sets used as part of the analysis can be blended from other sources, making it more than simply a tool for extending ERP or CRM reporting.

About the author

Clinton Jones has experience in international enterprise technology and business process on four continents and has a focus on integrated enterprise business technologies, business change and business transformation. Clinton also serves as a technical consultant on technology and quality management as it relates to data and process management and governance. In past roles, Clinton has worked for Fortune 500 companies and non-profits across the globe.

