Enabling Self-service Delivery to Meet the Immediate Need for Remote Knowledge

The demand from business users for data analytics has long surpassed the speed at which IT can deliver safe tools to meet it. So, years ago, business users started to help themselves.

A recent study by Dresner Advisory Services found 62% of business users now view self-service analytics as critical or very important.

Self-service analytics has grown far more sophisticated in recent years. Organizations now have the potential to empower users with feature-rich capabilities while reducing IT costs. Yet many IT leaders don’t understand exactly what self-service analytics can do or the best way to deliver them to users.

CAI, which has provided business technology services for two decades, examined this area in the final segment of its six-part series, “Activating Data & Analytics for Real Business Value Creation.” This summary highlights key points from that webinar, “Enabling Self-service Delivery for Remote Knowledge Needs.”

The entire series features Steven Stone, CEO & Founder of NSU Technologies, and Tom Villani, Senior Vice President, Digital Innovation, at CAI. Stone, a former CIO at Lowe’s and L Brands, shares real-world examples and proven solutions to help companies accelerate their transformations. We invite you to review past sessions for free and to contact us for a complimentary Data & Analytics Readiness Assessment.


The Case for Self-service Analytics

Before trying to provide self-service analytics, it’s crucial to understand exactly what they are—and what they aren’t. For years, many IT professionals watched ambitious non-IT users build their own data tools with Excel or SharePoint. Many yielded insights, but they also created some new problems.

Years ago at L Brands, for example, the central repository for production reports had grown to over 101,000 reports. However, only about 3% of the reports had been run more than once in the prior year, and less than 1% were executed on a regular basis. There were no naming standards, no consistency in report folders, and most reports were simply copies of other reports. As a result, finding the right report to use was a cumbersome and frustrating exercise. With proper planning, we can avoid these types of problems and offer users a much better self-service analytics experience.

Self-service analytics is more than a tool. It’s really content management in which the data is the content. A comprehensive self-service analytics framework helps to describe the data, define ways to connect to it, offer tools to analyze it, and, importantly, establish protocols to share insights and analysis safely across the organization.

This approach generates very tangible benefits such as supporting better decisions, generating insights far more quickly, and enhancing organizational agility through improved collaboration. These benefits are the primary reason Gartner estimates the number of so-called “citizen data scientists” will grow at five times the rate of skilled data scientists through 2021.

In a sense, self-service analytics is helping to reduce costs and speed analytical processes by taking out the IT middleman. Instead of relying on an IT professional to update a BI tool, self-service business users are empowered to address their own needs with high-quality data, data discovery, visualization tools, and other capabilities.

A proper framework includes a strong semantic layer that provides a common language that enables access to high-quality data for analysis with a wide variety of tools like data discovery, data manipulation, visualization, desktop productivity, or BI tools. It is important to realize that self-service analytics is more than simply a tool to generate insights.

By 2021, self-service analytics and BI users will produce more analysis than Data Scientists. (Gartner)

Insights that do not lead to actions (or decisions) are not really insights. It is imperative to map insights into decisions and measure the impact of those decisions against organizational KPIs. A comprehensive self-service life cycle governs the creation, prototyping, piloting, and production of analytics. The life cycle will clearly define user roles and rights regarding the creation, verification, promotion, and sharing of analytics content with team members.


Enabling Self-service Analytics

In past sessions, we stressed the importance of governance. In a self-service world, you need even more governance to establish “guard rails” that allow users to work safely, in an agile manner. Self-service governance will be role-based with different users possessing different rights and privileges based on their level of competence and responsibility.

Systematic controls will be established to verify user rights for the creation, verification, and promotion of content, preventing information from propagating before it has been properly vetted. These controls must apply to all layers of the framework. These layers include:

  • The Data Layer
  • The Information Layer
  • The Analytics Layer
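The role-based controls described above can be sketched in a few lines. This is a hedged illustration, not CAI’s actual model: the role names follow the personas discussed later in this summary, and the specific rights assigned to each role are assumptions for demonstration.

```python
# Illustrative "guard rails": each role carries a set of rights covering
# the lifecycle stages (creation, verification, promotion). The exact
# assignments below are hypothetical.
RIGHTS = {
    "consumer": {"view"},
    "explorer": {"view", "create"},
    "innovator": {"view", "create", "request_promotion"},
    "expert": {"view", "create", "verify", "promote"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check a user's rights before permitting a lifecycle action."""
    return action in RIGHTS.get(role, set())
```

In practice these checks would sit in front of every layer of the framework, so that, for example, only verified roles can promote content out of the sandbox.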

The data layer includes production and test environments called sandboxes. The sandbox environment enables verified users to build prototypes and test them in ways that encourage innovation and experimentation. To do this, users will often need to perform some type of data enrichment to enhance or refine data. Enrichment can be as simple as creating a new metric on a dashboard or as involved as pulling in third-party data sources to use alongside existing organizational data. A key element of the data layer is ensuring performance, which will often dictate the type of physical technology architecture needed to persist data.

The information layer, or data fabric, consists of many components that operate together in a unified manner. This layer may include connectors to different data sources, data management, API management, and security. Users will often interact with the data fabric through the semantic layer, which, in a self-service deployment, can include concepts such as data virtualization. The semantic layer enables users to search using business terms common to their organization.
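The idea of a semantic layer, mapping an organization’s business vocabulary onto physical data, can be sketched minimally. All table and column names below are hypothetical examples, not a real schema.

```python
# Minimal sketch of a semantic layer: business terms mapped to the
# physical storage behind them. Names are illustrative assumptions.
SEMANTIC_LAYER = {
    "net sales": {"table": "fct_sales", "column": "net_sales_amt"},
    "active customer": {"table": "dim_customer", "column": "is_active_flag"},
}

def resolve(term: str) -> dict:
    """Translate a business term into the physical source it refers to."""
    key = term.strip().lower()
    if key not in SEMANTIC_LAYER:
        raise KeyError(f"No mapping defined for business term: {term!r}")
    return SEMANTIC_LAYER[key]
```

A user searching for “Net Sales” never needs to know which fact table or column holds the figure; the layer resolves the term for them.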

By leveraging an adaptive analytics fabric, enabling self-service analytics throughout an organization is 56% faster, cheaper, and more efficient. (Enterprise Strategy Group)

APIs can augment the semantic layer, providing a consistent and secure manner to access data. For example, at L Brands a common set of APIs was developed to connect to key data subject areas such as customer, product, and inventory. These APIs were exposed to users through a common repository. The APIs greatly simplified data access for users while giving IT greater control over performance, and they enabled the rationalization of many reports that had been used only to pull data down to end-user productivity tools.

The analytics layer includes all the tools various users will need to develop and share analytics. To avoid confusion, it is important to recognize and mitigate overlaps among analytic tools. A great way to do this is to limit tools by category or use case.
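The subject-area API pattern described above, one consistent interface per governed subject, can be illustrated with a small sketch. The base URL, paths, and class are hypothetical; the actual L Brands APIs are not public.

```python
# Hypothetical sketch of a common API surface over key subject areas
# (customer, product, inventory), giving users one consistent way in
# while IT keeps control over what is exposed.
SUBJECT_AREAS = {"customer", "product", "inventory"}

class DataAPIClient:
    """Builds consistent endpoint URLs, one per governed subject area."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def endpoint(self, subject: str, entity_id: str) -> str:
        # Reject anything outside the governed subject areas.
        if subject not in SUBJECT_AREAS:
            raise ValueError(f"Unknown subject area: {subject!r}")
        return f"{self.base_url}/{subject}/{entity_id}"
```

Because every lookup goes through the same narrow interface, IT can tune performance and retire one-off extract reports without disrupting users.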

Search and navigation tools help users find what’s already been built and ensure they don’t build it again. Strong search and navigation will greatly enhance productivity and decrease user frustration.

Creating new analytics may require data preparation. Tools like Alteryx, Paxata, or Trifacta are gaining market share with their ability to access, distribute, and wrangle data. Data discovery tools are often used by “power users” who need to explore data from multiple sources. Visualization tools are often used by power users and developers to build visual dashboards for executives and other non-technical users. Finally, enterprise dashboard and reporting tools such as Power BI, MicroStrategy, and Tableau are used to develop and distribute standardized corporate reporting.

Development environments for machine learning and artificial intelligence are becoming increasingly important. Most of the large cloud providers have excellent products for streamlining the process to build, train, and deploy models.

The development of content across all of these tools is typically reserved for more advanced skill levels, with the resulting insights consumed across the organization. Another key element of the analytics layer is enabling collaboration to provide instant access to knowledgeable resources who can help those working to solve a problem. This can enable collaborative analytics, which is a team-based approach to the problem-solving process.

One critical element of the analytics layer isn’t a tool at all, but rather a process called data storytelling. A typical data story will contain data, visuals, and narrative. The narrative provides the script or explanation for what is happening in the data. The visualizations highlight the patterns and anomalies in the data that would not otherwise be discovered. Data storytelling has proven to be more effective than reporting, dashboards, or visualizations alone in conveying memorable, persuasive, and engaging analytic experiences. Data storytelling is rapidly becoming a required skill set across the professional landscape.
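The three components of a data story named above can be captured in a simple structure. This is only an illustrative sketch; the field contents are invented examples.

```python
from dataclasses import dataclass

# Minimal sketch of the three data-story components: data, visuals,
# and narrative. All values below are hypothetical examples.
@dataclass
class DataStory:
    data: dict        # the underlying figures
    visual: str       # e.g. a chart reference or description
    narrative: str    # the script explaining what the data shows

story = DataStory(
    data={"q1_sales_m": 1.2, "q2_sales_m": 0.9},
    visual="line chart of quarterly sales",
    narrative="Sales dipped 25% in Q2, driven by the supply disruption.",
)
```

The point of the structure is that none of the three parts stands alone: the narrative explains the visual, and the visual surfaces the pattern in the data.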


Putting Self-service into Action

We’ve discussed different types of users and how they each have different needs. A frequently mentioned, but more frequently misunderstood concept for analytics is the development of personas. It’s helpful to think of personas as hypothetical characters, each of whom represents a different class of user. In this case, we’ll identify four personas used by Gartner to define a self-service analytics environment: consumers, explorers, innovators, and experts.

The biggest group when we’re talking about self-service, of course, includes the Consumers, who account for 80–90% of your organization. The Consumer is conversationally literate in data and uses it to make decisions. As the name implies, they consume analytics, but they don’t normally build their own. This group might include executives, frontline associates, even your customers.

An example of a Consumer persona is a Sales Executive. The Sales Executive receives a dashboard on their tablet. They might click on a dashboard element and drill down into the data. They may also receive alerts on their tablets or cell phones when certain data thresholds are met. The Sales Executive will not build apps or modify analytics, so they will not need access to the sandbox or development tools.

Explorers, by contrast, may include 10–15% of your organization. Explorers leverage diagnostic analytics to drill into data and find patterns or facts that support business decisions. An example of an Explorer persona is a department manager who drills into the data to support daily business decisions about inventory, pricing, and the like. Explorers are very data literate, with a good understanding of the key metrics supporting their business area. The Explorer is comfortable using BI tools such as Tableau or Power BI to modify existing reports or dashboards, adding details that clarify trends and patterns. However, they are not well equipped to build a report completely from scratch.

Moving up a step, we come to the Innovators, who typically account for less than 5% of users but are constantly looking for ways to improve operations. An Innovator might work as a senior business analyst who spends most of their time on a laptop developing specific analytics to answer new or unique questions from senior management. They’re very comfortable using tools for data preparation, data discovery, visualization, and BI, and may even dabble in predictive modeling. They’re also excellent storytellers. Innovators spend a lot of time in the sandbox building and self-validating analytics. Once they’re satisfied, they’ll request that their work, along with any enriched data, be promoted to production to be consumed by others.

Finally, we have the less-than-1% of users we call the Experts. These are highly skilled data scientists or engineers who excel in areas like testing theories and building algorithmic models. They’re the apex predators of data. The Experts work with business and IT professionals to understand a business problem, then identify the appropriate data or analytics to solve it. For larger problems, they may enlist the help of data engineers or IT to perform elements of the data preparation, but they are capable of doing this themselves if needed. They may also be called on to verify the accuracy and quality of work done by others.

Understanding these personas, or others that may surface in your organization, is key to building a sustainable, successful, self-service analytics environment. Analytics technology and process bring critical insight to the surface. Personas, and the way each of them interacts with insight, establish the expedited route to taking successful actions. Building and deploying based on personas enables the definition and refinement of user experiences that create enthusiasm for analytics, not frustration.

Have additional questions?

We'd love to talk.

Get in Touch


