
Google Chrome and Business Intelligence in the Cloud

Panorama this week released the latest version of its gadget for Google Docs


The big news last week was of course Google's announcement of Chrome. And as several of the more informed bloggers noted (e.g. Nick Carr, Tim McCoy), the point of Chrome is to be not so much a browser as a platform for online applications, leading to a world where there is no obvious distinction between online and offline applications.

When I think about applications, I think about Business Intelligence applications; and thinking about online BI applications and Google, I naturally thought of Panorama - which, incidentally, this week released the latest version of its gadget for Google Docs.

Now, I'll be honest: I've had a play with it, and it is very slow, with a few bugs still around. But it's a beta, I'm told it's running on a test server and that performance will be better once it's released, and in any case it's only part of a wider client-tool story (outlined and analysed nicely by Nigel Pendse here), one which starts in the full Novaview client and includes the ability to publish views into Google Docs for a wider audience and for collaboration.

I guess it's a step towards the long-promised future where the desktop PC will have withered away into nothing more than a machine to run a browser on, and all our BI apps and all our data will be accessible over the web.

This all makes me wonder what BI will be like in the future... Time for some wild, half-formed speculation:

  • Starting at the back, the first objection raised to a purely 'BI in the cloud' architecture is that you've got to upload your data to it somehow. Do you fancy trying to push what you load into your data warehouse every day up to some kind of web service (see the upload sketch after this list)? I thought not. So I think a 'BI in the cloud' architecture is only going to be feasible when most of your source data lives in the cloud already, possibly in something like SQL Server Data Services, Amazon SimpleDB or Google BigTable, or possibly in a hosted app like Salesforce.com. This requirement puts us a long way into the future already, although for smaller data volumes and one-off analyses perhaps it's not so much of an issue.

  • You also need your organization to accept the idea of storing its most valuable data in someone else's data centre. Now I'm not saying this as a kind of "why don't those Luddites hurry up and accept this cool new thing"-type comment, because there are some very valid objections to be made to the idea of cloud computing at the moment: can I guarantee good service levels? Will the vendor I choose go bust, or get bought, or otherwise disappear in a year or two? What are the legal implications of moving data to the cloud, and possibly across borders? It will be a while before there are good answers to these questions, and even when there are, there's going to be a lot of inertia that needs to be overcome.

    The analogy most commonly used to describe the brave new world of cloud computing is with the utility industry: you should be able to treat IT like electricity or water - a service you can plug into whenever you want, and one you can assume will be there when you need it (see, for example, "The Big Switch").

    As far as data goes, though, I think a better analogy is with the development of the banking industry. At the moment we treat data in the same way that a medieval lord treated his money: everyone has their own equivalent of a big strong wooden box in the castle where the gold is kept, in the form of their own data centre. Nowadays the advantages of keeping money in the bank are clear - why worry about thieves breaking in and stealing your gold in the night, why go to the effort of moving all those heavy bags of gold around yourself, when it's much safer and easier to manage and move money about when it's in the bank? We may never physically see the money we possess, but we know where it is and we can get at it when we need it. I think the same attitude will be taken to data in the long run, but it does need a leap of faith to get there (how many people still keep money hidden in a jam jar in a kitchen cupboard?).

  • Once your data's in the cloud, you're going to want to load it into a hosted data warehouse of some kind, and I don't think that's too much to imagine given the cloud databases already mentioned. But how to load and transform it? That's not so much of an issue if you're doing ELT (there's a minimal ELT sketch after this list), but for ETL you'd need a whole bunch of new hosted ETL services. I see Informatica has one in Informatica On Demand; I'm sure there are others.

  • You're also going to want some kind of analytical engine on top - Analysis Services in the cloud, anyone? Maybe not quite yet, but companies like Vertica (http://www.vertica.com/company/news_and_events/20080513) and Kognitio (http://www.kognitio.com/services/businessintelligence/daas.php) are pushing into this area already; the architecture of this new generation of shared-nothing MPP databases surely lends itself well to the cloud model: if you need better performance you just reach for your credit card and buy a new node (see the MPP sketch after this list).

  • You then want to expose it to applications which can consume this data, and in my opinion the best way of doing this is of course through an OLAP/XMLA layer. In the case of Vertica you can already put Mondrian on top of it (http://www.vertica.com/company/news_and_events/20080212), so you can already have this if you want it; but I suspect you'd have to invest as much time and money in making the OLAP layer scale as you had invested in making the underlying database scale, otherwise it would end up being a bottleneck. What's the use of a high-performance database if your OLAP tool can't turn an MDX query, especially one with lots of calculations, into an efficient set of SQL queries and perform the calculations as fast as possible? Think of all the work that has gone into AS2008 to improve the performance of MDX calculations - the performance improvements compared to AS2005 are massive in some cases, and the AS team haven't even tackled the problem of parallelism in the formula engine yet (and I'm not sure they even want to, or whether it's a good idea). Again, there's been a lot of buzz recently about the implementation of MapReduce by Aster and Greenplum to perform parallel processing within the data warehouse; although it aims to solve a slightly different set of problems, it nonetheless shows that the problem is being thought about (see the MapReduce sketch after this list).

  • Then it's on to the client itself. Let's not talk about great improvements in usability and functionality, because I'm sure badly designed software will be as common in the future as it is today. It's going to be delivered over the web via whatever the browser has evolved into, and will certainly use whatever modish technologies are the equivalent of today's Silverlight, Flash, AJAX etc. But will it be a stand-alone, specialised BI client tool, or will there just be BI features in online spreadsheets (or whatever online spreadsheets have evolved into)? Undoubtedly there will be good examples of both, but I think the latter will prevail. Even today users prefer to get their data into Excel, the place they eventually want to work with it; the trend would move even faster if MS pulled its finger out and put some serious BI features into Excel...

    In the short term this raises an interesting question, though: do you release a product which, like Panorama's gadget, works with the current generation of clunky online apps in the hope that you can grow with them? Or do you, like Good Data and Birst (which I just heard about yesterday, and will be taking a closer look at soon), create your own complete, self-contained BI environment, which gives a much better experience now but which could end up being an online dead end? It all depends on how quickly the likes of Google and Microsoft (which is supposedly going to be revealing more about its online services platform soon) can deliver usable online apps; they have the deep pockets to finance these apps for a few releases while they grow into something people want to use, but can smaller companies like Panorama survive long enough to reap the rewards? Panorama has a traditional BI business that could certainly keep it afloat, although one wonders whether it is angling to be acquired by Google.
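To make that first objection concrete, here's a minimal sketch in Python of what pushing a nightly warehouse extract up to some cloud data service might look like. The endpoint, table name and chunk size are all hypothetical, invented for illustration; the real point is that the network transfer dominates at warehouse volumes, however you chunk or compress it:

```python
# A minimal sketch of pushing a daily warehouse load up to a hypothetical
# cloud data service. The URL and API are made up for illustration; the
# bottleneck is simply moving this much data over HTTP every night.
import gzip
import urllib.request

UPLOAD_URL = "https://data-service.example.com/v1/tables/fact_sales"  # hypothetical
CHUNK_ROWS = 100_000

def upload_extract(csv_path: str) -> None:
    # Stream the extract in fixed-size chunks rather than one giant request
    with open(csv_path, "rb") as f:
        buffer = []
        for i, line in enumerate(f, start=1):
            buffer.append(line)
            if i % CHUNK_ROWS == 0:
                _post_chunk(b"".join(buffer))
                buffer.clear()
        if buffer:
            _post_chunk(b"".join(buffer))

def _post_chunk(raw: bytes) -> None:
    body = gzip.compress(raw)  # compress to cut transfer time
    req = urllib.request.Request(
        UPLOAD_URL,
        data=body,
        headers={"Content-Encoding": "gzip", "Content-Type": "text/csv"},
        method="POST",
    )
    urllib.request.urlopen(req)  # network cost dominates at warehouse volumes
```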
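On the ELT point: the pattern that suits the cloud is to load raw data into a staging table and then transform it with SQL inside the warehouse itself, so the heavy lifting happens where the data already lives. A minimal sketch, using Python's sqlite3 as a stand-in for a hosted warehouse and assuming a sales.csv file with sale_date and amount columns:

```python
# A minimal ELT sketch: load raw rows into staging, then transform in SQL
# inside the database. sqlite3 stands in for a hosted cloud warehouse here.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS stg_sales (sale_date TEXT, amount TEXT)")
conn.execute("""CREATE TABLE IF NOT EXISTS fact_sales (
                    sale_month TEXT, total_amount REAL)""")

# Extract + Load: dump the raw file straight into staging, untransformed
with open("sales.csv", newline="") as f:
    rows = [(r["sale_date"], r["amount"]) for r in csv.DictReader(f)]
conn.executemany("INSERT INTO stg_sales VALUES (?, ?)", rows)

# Transform: done in SQL, using the warehouse's own engine
conn.execute("""
    INSERT INTO fact_sales (sale_month, total_amount)
    SELECT substr(sale_date, 1, 7), SUM(CAST(amount AS REAL))
    FROM stg_sales
    GROUP BY substr(sale_date, 1, 7)
""")
conn.commit()
```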
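On the shared-nothing MPP point, this toy sketch fakes the idea in Python: rows are hash-partitioned across independent "nodes" (worker processes), each node aggregates only its own partition, and a coordinator merges the partial results. "Buying a new node" is just increasing NUM_NODES; this is an illustration of the principle, not how any particular vendor implements it:

```python
# A toy shared-nothing MPP aggregation: hash-partition by the grouping key,
# aggregate each partition in a separate process, merge the partials.
from collections import Counter
from multiprocessing import Pool

NUM_NODES = 4  # adding a node is just changing this number

def node_aggregate(partition):
    # Each node sees only its own data: no shared storage, no coordination
    counts = Counter()
    for product, qty in partition:
        counts[product] += qty
    return counts

if __name__ == "__main__":
    rows = [("widget", 2), ("gadget", 5), ("widget", 1)] * 1000
    # Distribute rows by hashing the grouping key, as an MPP database would
    partitions = [[] for _ in range(NUM_NODES)]
    for row in rows:
        partitions[hash(row[0]) % NUM_NODES].append(row)
    with Pool(NUM_NODES) as pool:
        partials = pool.map(node_aggregate, partitions)
    totals = sum(partials, Counter())  # coordinator merges partial aggregates
    print(totals.most_common())
```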
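And on MapReduce: this single-process sketch just shows the shape of the computation - a map step emits key/value pairs, a shuffle groups them by key, and a reduce step combines each group. In-warehouse implementations like Aster's or Greenplum's run the map and reduce steps in parallel next to the data; the sample records here are invented for illustration:

```python
# A bare-bones MapReduce sketch: map -> shuffle (group by key) -> reduce.
from itertools import groupby
from operator import itemgetter

def map_phase(record):
    # Emit (key, value) pairs - here, one pair per sale
    region, amount = record
    yield (region, amount)

def reduce_phase(key, values):
    # Combine all values seen for a single key
    return (key, sum(values))

records = [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 30.0)]
# Map
pairs = [pair for rec in records for pair in map_phase(rec)]
# Shuffle: group intermediate pairs by key
pairs.sort(key=itemgetter(0))
grouped = groupby(pairs, key=itemgetter(0))
# Reduce
results = [reduce_phase(k, (v for _, v in vals)) for k, vals in grouped]
print(results)  # [('APAC', 75.5), ('EMEA', 150.0)]
```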

So there we go, just a few thoughts I had. Anyone got any comments? I like a good discussion!


More Stories By Chris Webb

Chris Webb is an IT Consultant based in London, UK.


