Cloud Computing: Automation's Great - But I Trust Humans More

Opinion: Cloud Computing vs Humans

Brendan Cooper's Blog


Time and again I’ve found myself let down by the latest and greatest RSS tools.

I’m not really sure how to get around this problem, other than by being constantly vigilant - to the extent that it’s almost easier to forget RSS altogether and just monitor ‘by hand’.

This is how it should work

I’m currently working on the best way to use the various RSS tools out there to create a flexible yet powerful monitoring system. To my mind, a good solution is:

* Yahoo Pipes for processing. Yahoo Pipes is an RSS mash-up service, so you can do stuff like take feeds and bring them together, split them apart and filter them, all through a nice graphical interface. It’s flexible and powerful, and you can create really nice modular pipes that slot together like Lego. Want to search for blogs? Insert your ‘blog search engine’ pipe. News? Same. Microblogging? No problem. (For a rough code sketch of this merge-and-filter step, see just after this list.) One of these days I’m going to put a quick tutorial about Pipes on this blog. But not for a while yet. I explain why below.

* Feedburner for future-proofing. Feedburner is an RSS ‘add-on’ service enabling you to add titles and descriptions to feeds but, for my money, its most important feature is the feed renaming service. So if you have a feed with the URL http://x.y.com/feeds/asdhJAH72jjaaSS99.xml, just plug that into Feedburner at one end, tell it you want it to be called ‘My lovely feed’ instead, and from then on it also has the URL http://x.y.com/mylovelyfeed. So whatever RSS feed is coming in, it retains the same address. This means that, if you have to swap in different source feeds, you don’t then have to scratch around looking for whatever you had those feeds plugged into. If you know they’re going into Feedburner then you just change it there. Everything else stays the same.

* Google Reader for archive and analysis. Google Reader is an online RSS aggregator, so all you need is a Google account and you can use Google Reader’s very powerful features with no installs or upgrades needed. So, you’ve got a cool RSS feed created from Yahoo Pipes, going into Feedburner and retaining the same lovely name. You’re now ready to plug that lovely feed into any other RSS-enabled tool. The next thing you want to do is analyse it, so plug it into Google Reader and suddenly you can filter for ad-hoc queries, star or share items, go through archives, even produce web pages for clients and extra feeds.

* Netvibes for display. Netvibes is an online RSS aggregator too, but while Google Reader’s good if you like lists, sometimes people like columns. So take your lovely RSS feed and this time use Netvibes to create a ‘front end’ for your feeds. You monitor the Google Reader stuff, while the client gets to see a really neat dashboard-style display. You can add charts and all sorts of bells and whistles.
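To make that first step concrete, here’s a minimal sketch of the merge-and-filter idea in Python rather than in Pipes’ graphical editor. It leans on the third-party feedparser library, and the feed URLs and keywords are placeholders I’ve made up, not real endpoints - treat it as an illustration of the technique, not a drop-in replacement for a pipe.

```python
# A rough equivalent of a filtering pipe: fetch several feeds, pool the
# entries, keep only keyword matches, and return them newest first.
# Requires the third-party feedparser library (pip install feedparser).
# The feed URLs and keywords below are made-up placeholders.
import time

import feedparser

FEED_URLS = [
    "http://example.com/blogsearch.xml",  # stand-in for a 'blog search' feed
    "http://example.com/newssearch.xml",  # stand-in for a news search feed
]
KEYWORDS = ["cloud computing", "rss"]  # assumed lowercase


def merged_and_filtered(urls, keywords):
    """Pool every entry from every feed, then keep the keyword matches."""
    entries = []
    for url in urls:
        entries.extend(feedparser.parse(url).entries)
    matches = [
        e for e in entries
        if any(k in (e.get("title", "") + " " + e.get("summary", "")).lower()
               for k in keywords)
    ]
    # Newest first; entries with no parseable date sink to the bottom.
    matches.sort(
        key=lambda e: time.mktime(e.published_parsed)
        if e.get("published_parsed") else 0.0,
        reverse=True,
    )
    return matches


for entry in merged_and_filtered(FEED_URLS, KEYWORDS):
    print(entry.get("title", "(no title)"))
```

The point is the shape of the pipeline - fetch, pool, filter, sort - which is exactly the chain of modular steps a pipe wires together graphically.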

There’s just one problem: it doesn’t

I know this can work. I’ve seen it work. But there are frustrations along the way, and recently I’ve started to wonder whether these services can be relied upon to work.

A couple of weeks ago I noticed that some critical Feedburner feeds had ‘died’. I was relying on them for data to come through for some important monitoring work. On further inspection I noticed one of the feeds had gone above the 512KB limit for Feedburner but, while annoying, that didn’t explain the other problems I was having. Other people were commenting on Twitter about similar problems, and at about the same time I noticed my Feedburner-enabled subscriptions had halved.
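Incidentally, whether a feed has crossed that 512KB limit is quick to check for yourself before Feedburner silently chokes on it. A minimal sketch - the pipe URL is a placeholder, so substitute your own source feed:

```python
# Quick check: has a source feed gone over Feedburner's 512KB limit?
# The feed URL is a placeholder - substitute the real source feed.
import urllib.request

LIMIT_BYTES = 512 * 1024
FEED_URL = "http://pipes.yahoo.com/pipes/pipe.run?_id=EXAMPLE&_render=rss"

with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
    size = len(resp.read())

print(f"{size} bytes ({size / 1024:.0f}KB):",
      "OK" if size <= LIMIT_BYTES else "over the 512KB limit - expect trouble")
```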

The solution was to bypass Feedburner altogether and just use the Yahoo Pipes addresses instead. But this was far from ideal. I want to use Feedburner for control over the address. I want to feel I can rely on it.
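What Feedburner really gives me there is a layer of indirection: one stable address I hand out, pointing at whatever the source feed happens to be today. If you wanted to own that layer yourself, the bare-bones version is just a redirect under your control. A sketch, with placeholder URLs throughout:

```python
# A bare-bones, self-hosted stand-in for Feedburner's stable-address trick:
# subscribers point at /mylovelyfeed on this server, and only the TARGETS
# table changes when the underlying feed moves. All URLs are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGETS = {
    # stable local path -> whatever the source feed happens to be today
    "/mylovelyfeed": "http://x.y.com/feeds/asdhJAH72jjaaSS99.xml",
}


class FeedRedirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = TARGETS.get(self.path)
        if target:
            # 302 (temporary) so readers keep coming back via the stable URL
            self.send_response(302)
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "No such feed")


if __name__ == "__main__":
    HTTPServer(("", 8080), FeedRedirector).serve_forever()
```

Run it and point subscribers at http://yourserver:8080/mylovelyfeed; when the source feed moves, you edit one line in TARGETS and nothing downstream notices.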

But the real culprit in all this is turning out to be Yahoo Pipes. I have invested considerable time and effort in getting to know it. I’ve got a system that builds queries from keywords, goes out to just about every RSS-enabled social media source I can find, grabs those feeds, filters them, appends information to the titles and spits them out in virtually any configuration needed. I’ve tested it all, and I know it works.

But about two weeks ago I noticed Yahoo Pipes getting sluggish. It didn’t matter what I was using to access it - my PC at home, my laptop at work, IE, Firefox, whatever.

And this weekend, I can’t do anything with it. I need to add some tweaks to the system, but it either times out, or refuses to save my changes.

I mean, as I said earlier, I’d love to pass on some of what I’ve learned on this blog. But I cannot. Even when the system is working it’s just too slow. I find I’m wandering off to stroke the cat or do the crossword while Yahoo Pipes churns away.

So again, I have to ask: can I rely on it?

I can’t see the silver lining for the cloud

This is, of course, a criticism of cloud computing. While I absolutely love the idea of harnessing the power afforded me by Yahoo’s server farms to do weird and wonderful things with RSS, I hate, detest and loathe the notion that I’m totally dependent on them to be able to do so.

If, as has been happening for the past few months, I continue to creep into the cloud, I know that one day I’ll have really seriously important stuff in, say, a spreadsheet on Google Docs, that I cannot access when it’s critical that I can access it. Or I’ll get into trouble with a client because they’ll blame me for not making sure their RSS feeds are working properly.

Is the answer that I just don’t put all my trust in these services? Do I keep local versions of docs, just in case? In which case, what do I do for RSS monitoring? I mean, can I pay someone money to give me a better service? Is that the real issue here?

Perhaps the Luddites are right

So, to get back to my original point: at what point do I totally lose faith in these services?

I’ve spent enough time testing my systems to know that they work. The problem is, the services themselves don’t seem to work properly.

So do I monitor constantly and vigilantly to make sure everything is tickety-boo? Do I just hope that, come the day I’m dependent on Yahoo Pipes to work, and it doesn’t, I can quickly think of a workaround as I did the other day?

Or do I eventually decide that actually, it’s more reliable and in the long run more cost-effective simply to monitor individual blogs by visiting them on a daily basis? I mean, there’s something to be said for this. I would certainly get to know those bloggers more intimately. But this solution just doesn’t scale up. It’s not workable.

No. We need services like Feedburner and Yahoo Pipes to provide the service they say they will. I know they’re not bound by the kind of service level agreements that would be in place if we were actually paying them, but they surely have to operate within the bounds of, well, operability.

Because if they don’t, someone else will. I’ve already been checking out Microsoft’s Popfly mashup creator today to see if it can do what Yahoo Pipes should. And it’s already looking promising. We’ll see.

 

[This appeared originally here and is republished in full with the kind permission of the author, who retains copyright.]

More Stories By Brendan Cooper

Brendan Cooper is a Digital Senior Account Manager at Fleishman-Hillard in London, UK. He has spent the past 20 years laboring at various communications coalfaces. A graduate of the Information Systems Institute at Salford University, in the UK, he was a founder member of sharepages.com and, from that, Knowledge Technology Solutions PLC.


