How to Ease API Testing Constraints | @DevOpsSummit [#API #DevOps]

The top API testing issues that organizations encounter and how automation and a DevOps team approach can address them

Ensuring API integrity is difficult in today's complex application environments, which span cloud, on-premises, and hybrid scenarios. In this interview with TechTarget, Parasoft solution architect manager Spencer Debrosse shares his experience with the top API testing issues that organizations encounter and how automation and a DevOps team approach can address them.

The following is an excerpt from that interview...

What makes testing APIs challenging?
When you're building an application, you're not just using your own APIs or your own internal applications. Instead, you have to rely on a wide variety of endpoints, APIs, and databases. We see lots of industry-specific, third-party API integration. For example, in the hospitality and airline industries, Sabre is common; in retail, credit card and address verification APIs are common.

If I integrate with Facebook or with other applications, how can I tell whether those APIs are in the state I need them to be in, are available on my release schedule, and will function the way I need?

That's really why availability is a constant problem: all of these pieces are moving. Developers, as well as testers and QA architects, need to get all those pieces in sync to optimize their release schedule.
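To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of pre-release availability check a team might script for a third-party API. The endpoint URL and the "status" field are hypothetical, and the sketch assumes the requests package is installed; it is not taken from the interview or from Parasoft's tooling.

    # Minimal pre-release availability check for a third-party API.
    # The endpoint and the expected "status" field are hypothetical.
    import sys
    import requests  # assumes the requests package is installed

    ENDPOINT = "https://partner.example.com/api/v1/status"

    def check_endpoint(url, timeout=5):
        """Return True if the endpoint responds in time and reports a healthy state."""
        try:
            response = requests.get(url, timeout=timeout)
        except requests.RequestException as exc:
            print(f"UNAVAILABLE: {url} ({exc})")
            return False
        if response.status_code != 200:
            print(f"UNEXPECTED STATUS: {url} returned {response.status_code}")
            return False
        if response.json().get("status") != "ok":
            print(f"DEGRADED: {url} is up but not in the state we need")
            return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if check_endpoint(ENDPOINT) else 1)

A script like this can gate a build or release pipeline: a non-zero exit code flags the dependency as not ready before downstream testing begins.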

How does a business's organizational structure hamper API testing?
Access to internal resources can be a challenge. Frequently, all of these resources are controlled and managed by different groups. If I'm a developer building an application, I work in an environment with many groups I rely on; it's not just my own development effort. These internal resources may be unreliable, or I may have little control over them. Many financial organizations face internal testing bottlenecks associated with mainframe access, for example.

In another example, as a developer, I may rely on a database maintained by a DBA on a separate team. Or I could rely on an API maintained by a different group of developers (and these developers may or may not be part of my organization). This disconnect between who needs an API for testing and who controls the API means that my test environment will commonly be a bottleneck in my development or QA process.
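One common way to work around that bottleneck is to stub or virtualize the dependency you don't control so tests can run even when the real system is unavailable. The Python sketch below is only a bare-bones illustration of that idea, not the service virtualization approach the article alludes to; the /accounts route and canned payload are hypothetical, and it assumes the requests package is installed.

    # Illustrative stub for a dependency owned by another team (e.g., a DBA-managed
    # service). The route and payload are hypothetical.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    import requests  # assumes the requests package is installed

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/accounts/42":
                body = json.dumps({"id": 42, "status": "active"}).encode()  # canned response
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, *args):  # keep test output quiet
            pass

    def test_account_lookup_against_stub():
        # Bind to an ephemeral port and serve the stub in the background.
        server = HTTPServer(("127.0.0.1", 0), StubHandler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        base_url = f"http://127.0.0.1:{server.server_port}"
        try:
            data = requests.get(f"{base_url}/accounts/42", timeout=2).json()
            assert data["status"] == "active"
        finally:
            server.shutdown()

    if __name__ == "__main__":
        test_account_lookup_against_stub()
        print("stubbed dependency test passed")

With a stand-in like this, the team that needs the API for testing is no longer blocked by the team that controls it.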

Is testing APIs more difficult from the availability standpoint than testing software, or are there no differences?

The increased focus on mobile development and interconnectivity of applications means that testing in just about any application development project will rely heavily on API integration. So, more API testing is being done than in the past, and that adds another layer of work for quality assurance teams. Otherwise, there is little difference in resource availability problems for API and application testers in modern development.

How are development organizations addressing this API test problem?

From a process standpoint, they're using DevOps to provide more collaboration and fewer constraints for API testers. DevOps, in particular, facilitates "shifting left", where testing is done earlier...
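As a purely illustrative example of shifting left, a lightweight contract test like the Python/pytest sketch below can run on every commit in CI, long before a full end-to-end environment exists. The staging URL, resource IDs, and required fields are hypothetical assumptions, not details from the article.

    # Illustrative "shift-left" contract test run early in CI.
    # The endpoint and expected fields are hypothetical.
    import pytest
    import requests  # assumes the requests package is installed

    BASE_URL = "https://staging.example.com/api"      # hypothetical staging endpoint
    REQUIRED_FIELDS = {"id", "name", "created_at"}    # assumed contract

    @pytest.mark.parametrize("resource_id", [1, 2, 3])
    def test_resource_contract(resource_id):
        response = requests.get(f"{BASE_URL}/resources/{resource_id}", timeout=5)
        assert response.status_code == 200
        # Catch contract drift early, before downstream teams build on the API.
        assert REQUIRED_FIELDS.issubset(response.json().keys())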

***

The complete article continues to discuss:

  • What test-driven development means from a business perspective
  • What technologies are needed to reduce pressure on API testers
  • Why and how to perform a large number of validations very quickly at an early stage of the SDLC
  • Tips for capturing the data needed to make intelligent business decisions

You can read the complete Ways to Ease API Testing Constraints article here (no registration required).

API Testing Resource Center
To access a host of API testing resources that can help you better understand and apply API testing best practices, see Parasoft's API Testing Resource Center.

About Parasoft API Testing
Parasoft's API Testing solution is widely recognized as the leading enterprise-grade offering for API testing and API integrity. Thoroughly test composite applications with robust support for REST and web services, plus an industry-leading 120+ protocols/message types.

Why choose Parasoft for API Testing?

  • Industry leader since 2002
  • Recognized for ease of use and an intuitive interface
  • Advanced intelligent automated test generation
  • Extensive protocol and technology support
  • End-to-end testing across multiple endpoints (services, ESBs, databases, mainframes, web UI, ERPs...)
  • Designed to support continuous testing

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC—specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
