Knowledge and Data Management
Real-Time BI
The key to a successful real-time business intelligence project is to find the right balance between how fast business users want BI data to be delivered to them and how much it would cost to achieve those results. When business executives know the expected cost of building and running a real-time BI system that meets user expectations for data delivery speeds, they “can decide how badly they want [the data] that fast” and the plans can be changed as needed to better align with budget priorities and realities.
- What it really means to provide real-time BI capabilities
- The technologies that companies are using as part of their real-time BI and analytics programs, and the two main approaches for delivering real-time data
- Some major misconceptions shrouding real-time BI projects
Business intelligence (BI) systems and their supporting data warehouses are only as good as the data that goes into them. And if you aren’t properly handling the BI data integration process, your end users — and ultimately, your organization — may be in for trouble.
With BI tools becoming increasingly pervasive in organizations, and more critical to the success of business operations, making sure that you have a well-designed and well-executed process for integrating BI data is of paramount importance.
The volumes of data that had to be handled back then seem amusingly modest by the standards of today’s “big data” applications: IBM’s 3380 disk storage unit could hold what seemed like a capacious 2.5 GB of data when it was launched in 1980. To put data into IMS, you needed to understand how to navigate the physical structure of the database itself, and it was a radical step indeed when IBM launched DB2 in 1983. In this new approach, programmers would write in a language called SQL, and the database management system itself would figure out the best access path to the data via an “optimiser.”
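The shift from navigating a physical database structure to declaring the desired result can be sketched with a toy example. This uses SQLite from Python’s standard library purely as an illustration (the table and column names are invented); the same principle applies to DB2 or any other SQL engine: the query states *what* data is wanted, and the optimiser chooses the access path.

```python
import sqlite3

# A small in-memory database for illustration (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("acme", 120.0), ("globex", 75.5), ("acme", 30.0)],
)

# The query names the result, not the navigation steps to reach it.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders WHERE customer = ? GROUP BY customer",
    ("acme",),
).fetchall()
print(rows)  # [('acme', 150.0)]

# The engine can report the access path its optimiser chose
# (here, a search using idx_customer rather than a full table scan).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("acme",)
).fetchall()
print(plan)
```

Contrast this with the IMS era, where the program itself had to traverse the hierarchy record by record; here the traversal strategy is the database’s problem, not the programmer’s.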
Enter MPP, columnar and Hadoop
The database industry has responded in a number of ways. Throwing hardware at the problem was one: massively parallel processing (MPP) databases split database loads among many processors. The columnar data structure pioneered by Sybase turned out to be well suited to analytical processing workloads, and a range of new analytical databases sprang up, often combining columnar and MPP approaches. The giant database vendors responded either with their own versions or by simply purchasing upstart rivals: Oracle brought out its Exadata offering, IBM purchased Netezza and Microsoft bought DATAllegro. A range of independent alternatives also remains on the market.
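Why the columnar layout suits analytical workloads can be shown with a deliberately simplified sketch (this is a toy model, not any vendor’s implementation): an aggregate over one attribute only needs to touch that attribute’s values, whereas a row store drags every field of every record through memory.

```python
# Toy illustration of row-oriented vs. column-oriented storage.

# Row store: each record is kept together -- good for fetching whole rows,
# e.g. in transactional workloads.
rows = [
    {"id": 1, "region": "EU", "revenue": 100.0},
    {"id": 2, "region": "US", "revenue": 250.0},
    {"id": 3, "region": "EU", "revenue": 75.0},
]

# Column store: each attribute is kept contiguously -- an analytical
# aggregate over one column scans only that column's values.
columns = {
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "revenue": [100.0, 250.0, 75.0],
}

# Analytical query: total revenue across all records.
total_row_store = sum(r["revenue"] for r in rows)  # touches every field
total_col_store = sum(columns["revenue"])          # touches one column
assert total_row_store == total_col_store == 425.0
```

Real columnar engines add compression and vectorized execution on top of this layout, and MPP systems then split such column scans across many processors; the layout shown here is only the underlying idea.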
Enterprise Data Access
Access and use data sources from across your enterprise
Most organizations have data strewn across a large number of heterogeneous data sources. SAS provides extremely powerful data access, manipulation and management capabilities, and is unmatched in the number of sources and the ease with which structured and unstructured data – including big data – can be accessed and assembled in one environment. By integrating all available data across your enterprise, SAS supports mission-critical business decisions with access to complete, up-to-date and accurate information.
Easy-to-use interfaces enable fast access to data on the most popular platforms – whether in PC files, relational databases, data warehouse appliances, legacy mainframe systems or Hadoop’s distributed file system (HDFS) – with visual subsetting of data. SAS also lets you use data from multiple repositories without physically moving or reconciling it. Support for open standards and message-queuing products enables you to read and write data from enterprise applications, as well as use Web services to send and receive information.