Six Reasons Your API Is the Windows Vista of APIs
by Justin Rohrman

Does your API suck? Okay, that one needs a little explanation.

If you've developed an API, it exposes some functionality to users. It might suck to learn: the documentation might be unclear and the function signatures counter-intuitive. It might suck to use, doing a lot of things but never quite the thing you need right now.

After working with many companies developing new API functionality, and building out demo material from publicly available APIs (starting with the thought "this should be easy ..."), I have developed some opinions on the subject. Just like a restaurant that doesn't pay attention to detail, an awkward API can have a dozen small things that add up to a big problem. Misplaced silverware, a long wait time, a slow waiter, details wrong on the order ... no one of these will make you want to stand up and leave, but put together, they'll make sure you never come back.

No one thing will make your API suck, but, just like at the restaurant, there is an additive effect. Let's look at a few reasons your API might turn people off and figure out how to get them back.

1. Documentation
As a consumer of APIs, this is the first place I go to see what's going on. Hopefully the documentation has all the details I need on authentication and creating tokens, required headers and query strings, sample paths and results. Ideally, the API has a complete demo in every reasonable client language - not just how to call the code through the web, but how to call the code in Python, and how to get the support libraries you used. When I read API documentation, inevitably, at least one of these things is missing, another is out of date, and I get sent on a scavenger hunt for a new path or a header that could have been described more clearly. Problems like this don't ruin my day, but they sure don't make it better.

We live in the future; there are plenty of tools available to create the function signature and automatically update documentation each time a build runs. Documentation frameworks like Swagger are leading the way in making documentation simpler.
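As a rough sketch of what that looks like, here is a hypothetical books endpoint described in OpenAPI (the specification behind Swagger). The paths and fields are made up for illustration, not taken from any real API:

openapi: 3.0.0
info:
  title: Books API
  version: "1.0"
paths:
  /api/v1/books:
    get:
      summary: List books for the authenticated user
      parameters:
        - name: limit
          in: query
          description: Maximum number of books to return
          schema:
            type: integer
      responses:
        "200":
          description: A JSON array of book objects

Because the description is machine-readable and lives next to the code, tooling can regenerate reference pages, sample calls, and even client stubs on every build instead of letting the docs drift.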

2. User Experience
Pinpointing users and what they value is difficult even for run-of-the-mill software; everyone has their own goals, needs, and desires. APIs are no easier. For API testing, we have to consider the end user (the person searching for books on Amazon), but also other developers, the people who build their own book sub-sites powered by Amazon, and the internal ops team that needs information on how the API works. User experience at the second and third level is a little different.

I was building a few examples for API testing based on a popular virtual Kanban tool, reviewing the documentation for its endpoints. One endpoint returned a list of cards for a user, one returned all cards on a board, and one more returned the contents of the cards for a user. The paths for these were subtly different, and I ended up fumbling over them for an hour figuring out which was which. Sure, I could have reread the docs five more times to figure out why I wasn't getting the results I wanted. But having more clearly defined paths would have helped too, as the sketch below shows.
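To make the problem concrete, here is a hypothetical set of paths in that style (illustrative, not the real tool's endpoints) showing how easily they blur together:

GET /api/v1/members/{id}/cards        (cards assigned to a member)
GET /api/v1/boards/{id}/cards         (all cards on a board)
GET /api/v1/members/{id}/cards/all    (full card contents for a member)

Each one is reasonable on its own; side by side, the differences are a single path segment, which is exactly where an hour of fumbling comes from.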

3. Lack of Hypermedia
Imagine developing an iOS app built on top of someone else's API. Eventually the developers of that API are going to want to make changes, and sometimes those changes include changing the paths to endpoints.

The result here is that everyone depending on that API has to update their code to adapt to the changes. Hopefully, they find out about the new version before complaints from users start flying into inboxes.

One way to reduce this strain is to use hypermedia.

My colleague, Ben Ramsey, says this:

"When an API uses hypermedia, the URLs are no longer important. Clients talking to the API do not need to code to URLs because the API will always convey where to go next through hypermedia relationships. If a URL changes, then there's no problem. The change gets communicated through the API. This leads to a more flexible and evolvable API that can change over time without needing to update all the clients."

Hypermedia simplifies API usage for your users. Instead of POSTing to example.com/api/v1/users/new, you POST to example.com/api/v1 and include a special reference inside the data you send.
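A minimal sketch of what a hypermedia-style response might look like, using HAL-style "_links" (the field names and URLs here are illustrative assumptions, not a specific product's format):

{
  "_links": {
    "self":  { "href": "https://example.com/api/v1" },
    "users": { "href": "https://example.com/api/v1/users" }
  }
}

A client that wants to work with users follows the "users" link it was handed rather than hard-coding the path, so the server can move the endpoint later without breaking anyone.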

4. Authentication
Your data is the most important part of any non-trivial piece of software. That, of course, means the data is (hopefully) held safely behind a wall that requires a username and password for access. Sometimes this is no big deal: I POST a message to the authentication endpoint with my username and password, and in return get a token that I can use to authenticate and do the things I want to do.

Other times, I have to write an OAuth wrapper to handle authentication, which can be a big mess.

On behalf of API customers everywhere - please do not make me create a big mess.

If you have to create a complex authentication system, that's okay; just document how to get authorized, with sample code, in the software documentation. Ideally, write a package that gets the token for the user in a few languages, plus a little pseudo-code so users can write their own. Stopping with a link to someone else's "easy" example that only works in C# or Java, or that obscures a step or two and requires more Google searches, will guarantee confusion, frustration, and a lower adoption rate.
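For the simple token case, the sample code can be as short as this sketch in Python (the endpoint names and response fields are hypothetical):

import requests

BASE = "https://example.com/api/v1"  # hypothetical base URL for illustration

# Step 1: exchange credentials for a token
resp = requests.post(BASE + "/auth/token",
                     json={"username": "demo", "password": "secret"})
resp.raise_for_status()
token = resp.json()["token"]  # assumes the token comes back in a "token" field

# Step 2: send the token on every subsequent call
headers = {"Authorization": "Bearer " + token}
print(requests.get(BASE + "/books", headers=headers).json())

If your real flow is OAuth, the same principle applies: ship a working example of the whole dance, not a link to someone else's.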

5. Headers, and Bodies, and Bears
APIs exist as a way to talk to software; we use them to send and receive data. Sometimes that data travels over the wire as a blob of JSON or XML, and sometimes the data gets passed through the URL in the form of a query string. Sometimes it is a combination of both. One popular way to handle this is to send the data that authenticates a user as part of the URL query string, and the data you want to create or update as part of a JSON or XML blob.

Imagine working with an API that mixes query strings and request bodies throughout the software. A normal POST might look like:

POST example.com/api/v1/users/new?token=123456&newUser=userName

Along with that, a JSON body with all of the other details on the new user was required. I bet the first few POSTs to this endpoint will fail while you learn that part of the new user is sent in the query string.
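In Python, the call ends up split across two places, which is exactly where the confusion comes from (the endpoint and fields are hypothetical):

import requests

# Hypothetical endpoint: the token and part of the payload ride in the
# query string, while the rest of the new user goes in a JSON body.
resp = requests.post(
    "https://example.com/api/v1/users/new",
    params={"token": "123456", "newUser": "userName"},            # query string
    json={"email": "user@example.com", "fullName": "User Name"},  # JSON body
)
print(resp.status_code)

Until you know which fields go in params and which go in json, every guess is a failed request.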

The most important thing here is to be consistent. Having to figure out that an endpoint won't work because, unlike everywhere else, some of the data goes in the URL takes time and builds frustration.

6. To Err Is Human
Everyone makes mistakes, and I've made plenty when trying to write JSON to POST to an endpoint. Documentation can help me figure out the format and specific nodes I need in a JSON body, but it probably won't help me find the typo that is causing your API to reject my POST. HTTP response codes are fairly standard and help to some degree; at minimum, they tell me the category of mistake I've made.

Even better would be an error like this one, which points at the problem:

{
  "errors": ["uname is a required field"]
}
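On the server side, producing that kind of error takes very little code. Here is a minimal sketch in Python with Flask (the route and required fields are made up for illustration):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/users/new", methods=["POST"])
def create_user():
    body = request.get_json(silent=True) or {}
    # Name the exact missing fields instead of returning a bare 400
    missing = [f for f in ("uname", "email") if f not in body]
    if missing:
        return jsonify({"errors": [m + " is a required field" for m in missing]}), 400
    return jsonify({"created": body["uname"]}), 201

The few minutes it takes to name the missing field saves every consumer of the API a round of guesswork.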

Your API probably doesn't suck; most people don't have all of these problems at the same time. Talking to your testers is a good way to start finding improvements. What problems are they having, and what is slowing them down every day? They might point to a few of the ideas I have been talking about here, or maybe they will shed light on a new category of API problems.

How have you improved your API lately? Let us know in the comments!
