More Use Cases for Big Data Analytics

Measuring Development Productivity with Hadoop

After its initial start in research work and on social networking sites, Hadoop is now becoming a big part of the enterprise IT landscape. Recent announcements underscore this: Microsoft is embracing Hadoop as part of its Windows Azure High Performance Computing initiative, and Oracle has introduced new options like Oracle Loader support for Hadoop-processed data.

Initial Use Cases for Hadoop
The following are typical use cases that can be realized with the power of Hadoop:

  • Analyzing customer web usage to predict what would be of interest to the customer and targeting advertisements accordingly
  • Detecting fraud in online systems based on various behavioral patterns
  • Market and customer segmentation
  • Recommendation engines - increasing average order size by recommending complementary products based on predictive analysis for cross-selling

To learn more about Hadoop use cases, you can visit the site of Cloudera, which distributes Hadoop along with various support options suited to the enterprise: http://www.cloudera.com/why-hadoop/

You can also refer to my earlier article, Traditional vs Big Data Analytics, on various enterprise-class use cases that can be realized using big data analytics tools like Hadoop.

Providing Real-Time Dashboards for Development Productivity
While most of the above use cases are about runtime benefits to the enterprise, Hadoop, if used properly, can also provide much-needed insight into development itself. It can feed valuable dashboards that tell program managers and directors about a team's productivity: where the team stands with respect to code quality and code coverage, and whether the code can meet the required deadlines in the development life cycle. Let's analyze how this can be enabled with proper usage of Hadoop.

Large application development efforts are common, especially when your organization is developing products or other large custom applications. As a program manager you want a real-time dashboard of how your development teams are progressing. The following live information may provide you with a lot of insight for tracking the projects:

  • Lines of code (a rough measure of function points that also gives an idea of the functional coverage of the system)
  • Code coverage %, i.e., the percentage of code that is covered by the various unit test cases
  • Types of exceptions generated during unit testing, and whether they are application related or system related; for example, a lot of application-related exceptions during development may indicate that the development team does not fully understand the functionality (see the sketch following this list)
  • Code quality analysis - whether the code has any audit- or metric-related issues, such as excessive depth of inheritance, high cyclomatic complexity, etc.
  • Traceability of application modules to requirements
  • Whether the build process is failing to integrate the code and, if so, where the problem dependencies are
  • Whether the development team is following the prescribed coding conventions and development standards
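
To make the exception-type metric concrete, here is a minimal MapReduce sketch in Java, using the standard org.apache.hadoop.mapreduce API, that scans unit-test logs for exception class names and counts them, crudely bucketed as system related (java.*/javax.*) or application related. The regular expression, the bucketing rule and the class and path names are illustrative assumptions, since no particular log format is prescribed here:

    import java.io.IOException;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ExceptionTypeCount {

      // Mapper: emits (bucket + exception class, 1) for every exception
      // name found in a log line.
      public static class ExceptionMapper
          extends Mapper<Object, Text, Text, IntWritable> {

        // Matches fully qualified names like java.lang.NullPointerException
        // or com.acme.orders.InvalidOrderException.
        private static final Pattern EXCEPTION =
            Pattern.compile("([\\w.]+(?:Exception|Error))");
        private static final IntWritable ONE = new IntWritable(1);
        private final Text outKey = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          Matcher m = EXCEPTION.matcher(value.toString());
          while (m.find()) {
            String name = m.group(1);
            // Crude split: java.*/javax.* counted as system related,
            // everything else as application related. Adjust the prefixes
            // to your own package naming conventions.
            String bucket =
                (name.startsWith("java.") || name.startsWith("javax."))
                    ? "SYSTEM" : "APPLICATION";
            outKey.set(bucket + "\t" + name);
            context.write(outKey, ONE);
          }
        }
      }

      // Reducer (also usable as a combiner): plain summation of the counts.
      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          total.set(sum);
          context.write(key, total);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "exception type count");
        job.setJarByClass(ExceptionTypeCount.class);
        job.setMapperClass(ExceptionMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /logs/unit-tests
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /reports/exceptions
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The reducer output is a simple tab-separated list of bucketed exception types and their counts, which is already close to what a dashboard widget needs.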

Currently most program managers depend on weekly meetings with the developers to derive this information, which leaves it subject to interpretation by individual developers. The main problem is that the above-mentioned metrics are scattered across multiple log files, and with a large development team this may run into a huge volume of unstructured text. Log files like the following will be of interest in this case (a sketch of staging them into HDFS follows the list):

  • Source code stored in various repositories
  • Eclipse or Visual Studio log files generated during development and unit testing
  • Log files generated by test tools like JUnit
  • Logging information generated by the application servers and web servers during development, as the developers will likely turn on Log4j or an equivalent logging mechanism
  • Debugging information generated by IDEs like Eclipse or Visual Studio
  • Logs generated by the code quality analysis tools
  • Logs generated by code vulnerability scanning tools
  • Logs generated by build environments like Ant, CruiseControl or the equivalent
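
Before any analysis can happen, these logs have to land in HDFS. The following is a minimal staging sketch using Hadoop's FileSystem API; the local and HDFS paths are hypothetical, and in practice a scheduled job or a log collector such as Flume would sweep each developer machine:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LogStager {
      public static void main(String[] args) throws Exception {
        // Picks up the cluster address from core-site.xml on the classpath.
        FileSystem fs = FileSystem.get(new Configuration());

        // Hypothetical paths: a nightly sweep of a developer machine's
        // JUnit and build logs into a dated HDFS directory.
        Path local = new Path("/var/log/dev/junit");
        Path remote = new Path("/logs/unit-tests/2011-11-15");

        fs.mkdirs(remote);
        // false = keep the local copy, true = overwrite anything already there.
        fs.copyFromLocalFile(false, true, local, remote);
        fs.close();
      }
    }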

Typically, Hadoop can be used to analyze these large volumes of unstructured log files, and the output can be used to create real-time dashboards for the program managers.
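
As one possible last mile, the reducer output (the tab-separated part-* files) can be read back from HDFS and handed to whatever dashboard layer is in use. A sketch, again with no particular dashboard framework assumed:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DashboardFeed {

      // Reads the tab-separated reducer output into a map that a dashboard
      // layer can render; the last tab on each line separates the count
      // from the key.
      public static Map<String, Long> loadCounts(String outputDir) throws Exception {
        Map<String, Long> counts = new LinkedHashMap<String, Long>();
        FileSystem fs = FileSystem.get(new Configuration());
        for (FileStatus status : fs.listStatus(new Path(outputDir))) {
          if (!status.getPath().getName().startsWith("part-")) {
            continue; // skip _SUCCESS and other marker files
          }
          BufferedReader reader = new BufferedReader(
              new InputStreamReader(fs.open(status.getPath())));
          try {
            String line;
            while ((line = reader.readLine()) != null) {
              int tab = line.lastIndexOf('\t');
              counts.put(line.substring(0, tab),
                         Long.parseLong(line.substring(tab + 1)));
            }
          } finally {
            reader.close();
          }
        }
        return counts;
      }
    }

Polling this method on a schedule, once per build cycle, would keep the dashboard current between MapReduce runs.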

Summary
The success of this use of Hadoop depends on the technical implementation of the map and reduce functions that act on the huge set of log files, listed above, from each developer's machine. However, considering that similar algorithms have already been implemented for various web-based log analytics, this implementation should not be too difficult. If implemented properly, it can provide a real-time dashboard for program managers to monitor the performance of the development team and take corrective actions.
