API Containers News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API container conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing its APIs, going beyond just monitoring to understand the details of each request and response.

Kubernetes JSON Schema Extracted From OpenAPI

I've been doing my regular trolling of Github lately, looking for anything interesting. I came across a repository this week that contained JSON Schema for Kubernetes. That is interesting by itself, but I also thought the fact that they had auto-generated the individual JSON Schema files from the Kubernetes OpenAPI was worth a story. It demonstrates for me the growing importance of schema in all of this, and shows that having them readily available on Github is becoming more important for API providers and consumers.

Creating schema is an important aspect of crafting an OpenAPI, but I find that many API providers, or the consumers who are creating OpenAPIs and publishing them to Github, are not always investing the time into making sure the definitions, or schema portion, is complete. Another aspect, as Gareth Rushgrove, the author of the Github repo where I found these Kubernetes schema, points out, is that the JSON Schema support in OpenAPI often leaves much to be desired. Until version 3.0 it hasn't supported everything you need, and many of the ways you are going to use these schema won't be able to consume them from inside an OpenAPI, so you will need them as individual schema files, like Gareth has done.

I just published the latest version of the OpenAPI for my Human Services Data API (HSDA) work, and one of the things I've done is extract the JSON Schema into separate files, so I can use them in schema validation, and other services and tooling I will be using throughout the API lifecycle. I've set up an API that automatically extracts and generates them from the OpenAPI, but I'm also creating a Github repo that does this automatically for any OpenAPI I publish into the data folder of the repository. This way all I have to do is publish an OpenAPI, and there is automatically a page that tells me how complete or incomplete my schema are, as well as generates individual representations that I can use independent of the OpenAPI.
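
To make this a bit more concrete, here is a rough sketch of what one of these extracted files might look like--the Organization object and its properties are purely illustrative, not the actual HSDA or Kubernetes schema. A schema that lives buried in the definitions section of an OpenAPI gets lifted out into a standalone JSON Schema file that validators and other tooling can consume directly:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Organization",
  "description": "Extracted from the definitions section of an OpenAPI, usable on its own.",
  "type": "object",
  "properties": {
    "id": { "type": "string" },
    "name": { "type": "string" },
    "description": { "type": "string" }
  },
  "required": ["id", "name"]
}
```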

I am hoping this is the beginning of folks investing more into getting their schema act together. I'm also hoping this is something that OpenAPI 3.0 will help us focus on more, pushing API designers, architects, and developers to get their schema house in order, and publish them not just as OpenAPI, but as individual JSON Schema that can be used independently. I'm investing more cycles into helping folks learn about JSON Schema as I push my own awareness forward, and will be creating more tooling, training material, and stories that help out on this front. I'm a big fan of OpenAPI, and defining our APIs, but as an old database guy I'm hoping to help stimulate the schema side of the equation, which I think is often just as important.


Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track on what is going on with visualizations generated from data, I haven't seen much that is new and interesting when it comes to API driven visualizations, or specifically visualizations of API infrastructure. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it provides a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

The Netsil microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations that moves things beyond just IT and dev team monitoring, making operations observable by all stakeholders.

I'm glad to see service providers moving the needle when it comes to helping visualize API infrastructure. I'd like to see more embeddable solutions deployed to Github emerge as part of API life cycle monitoring. I'd like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations from Tyk and Dreamfactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I'll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.



Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions, to help prime the discussion around API deployment.

  • Where? - Where are APIs being deployed? On-premise, in the clouds, using traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From spreadsheets, document and file stores, or the central database, while also thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions

While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn't just deploy an API, but sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren't just overlapping with deploying APIs, they are essential to connecting API deployments with the rest of the API lifecycle.

Using Open Source Frameworks

Early on in this research guide I am focusing on the most common way for developers to deploy an API: using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don't take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.

Deployment In The Cloud

After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there are a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to some simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time of getting my hands dirty.

Essential Ingredients For Deployment

Whether in the cloud, on-premise, or even on device and at the network level, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about.

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it's bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment--DNS. In today's online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployment by default, both in transit, and in storage.

Some Of The Motivations Behind Deploying APIs

In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, and craft a narrative, and a guide for others to follow, that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources--providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, putting APIs within reach of business users.
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on device, when it comes to Internet of Things, industrial deployments, as well as at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations.
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee, but nobody has taken the lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but making sure API deployment is also part of the puzzle. Newer API service providers like Tyk are also pushing the envelope, but I still don't have the number of API deployment providers I'd like when it comes to referring my readers. It isn't a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners--it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer off of my API management research some years ago, but I really couldn't articulate it very well beyond just open source frameworks, and the emerging cloud service providers. After I publish this edition of my API deployment guide I'm going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are all worth looking at individually, so that I can better understand where they intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


KDL: A Graphical Notation for Kubernetes API Objects

I am learning about the Kubernetes Deployment Language (KDL) today, trying to understand their approach to defining their notion of Kubernetes API objects. It feels like an interesting evolution in how we define our infrastructure, and begin standardizing the API layer for it so that we can orchestrate as we need.

They are standardizing the Kubernetes API objects into the following buckets:

  • Cluster - The orchestration level of things.
  • Compute - The individual compute level.
  • Networking - The networking layer of it all.
  • Storage - Storage behind our APIs.
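
For reference, a minimal sketch of one such Kubernetes API object, a Deployment, expressed as JSON (Kubernetes accepts both JSON and YAML)--all names here are illustrative:

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "notes-api" },
  "spec": {
    "replicas": 2,
    "selector": { "matchLabels": { "app": "notes-api" } },
    "template": {
      "metadata": { "labels": { "app": "notes-api" } },
      "spec": {
        "containers": [
          { "name": "notes-api", "image": "example/notes-api:1.0" }
        ]
      }
    }
  }
}
```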

This has elements of my API lifecycle research, as well as a containerized, clustered, BaaS 2.0 in my view. Standardizing how we define and describe the essential layers of our API and application infrastructure. I could also see standardizing the testing, monitoring, performance, security, and other critical aspects of doing this at scale.

I’m also fascinated at how fast YAML has become the default orchestration template language for folks in the cloud containerization space. I’ll add KDL to my API definition and container research and keep an eye on what they are up to, and keep an eye out for other approaches to standardizing the API layer for deploying, managing, and scaling our increasingly containerized API infrastructure.


On Device Machine Learning API Stack

I was reading about Google's TensorFlow Lite in TechCrunch, and their mention of Facebook's Caffe2Go, and I was reminded of a conversation I was having with the Oxford Dictionaries API team a couple months ago.

The OED and other dictionary and language content API teams wanted to learn more about on-device API deployment, so their dictionaries could become the default. I asked a while ago when we would have containers natively on our routers, but I'd also like to add to that request--when will we have a stack of containers on device, where we can deploy API resources that can be used by applications, and augment the existing on-device hardware and OS APIs?

API providers should be able to deploy their APIs exactly where they are needed. API deployment, management, monitoring, logging, and analytics should exist by default in these micro-containerized environments on any device. Whether it's our mobile phones, our automobiles, or weather, solar, or other industrial device integrations, we are going to need API-driven data, ML, AI, augmented, and other resources on-device, in a localized environment.


The Ability To Deploy Your API To Heroku Is A Gateway Drug To The Wholesale, Containerized API Economy That Is Coming

I was talking through the features and roadmap for SecureDB with their team the other day, and one of the conversations that came up was around on-premise opportunities for deploying encrypted user, data, and file APIs. In my opinion, on-premise has morphed from meaning "in our data center", to wherever we store data and run our compute power--which could be spread across multiple cloud vendors.

SecureDB offers an extremely valuable API commodity in 2015, encrypted user, data, and file stores--something EVERY website and mobile application should have as part of their back-end. When you have something this valuable, and potentially ubiquitous, it needs to run wherever it is needed. With this in mind SecureDB is available on demand, as a retail API solution, but they will also deploy it for you "on-premise" on AWS or Heroku, with "more platforms in the roadmap".

While a wholesale, on-premise, deploy-anywhere API will not be required for all APIs, as some SaaS solutions won't benefit from this type of distribution, if you are a core architectural building block, or a utility API like SecureDB, your API will have to be deployable anywhere it is needed to compete in the API economy. With the explosion of Docker and Kubernetes, your API should technically be able to be deployed in any containerized environment, with a business model, licensing, and legal framework to support the different use cases.

The ability to deploy your API on Heroku is just a gateway drug to the wholesale, white label, containerized API economy that is coming, where the most agile, flexible, and interoperable APIs will dominate.


API PLUG, A New Containerized API Design, Deployment, and Documentation Solution

I started playing with the possibilities when it comes to API deployment using virtual containers like Docker early this year, exploring the constraints this approach to API deployment introduced on my own infrastructure. The process got my gears turning when it comes to deploying very simple data, content, and algorithmic APIs into any environment that runs Docker.

As 2015 has progressed, I'm seeing containers emerge as part of more stops along the API life-cycle, specifically in deployment and virtualization. A new one that just came across my radar is API PLUG. I haven't played with the containerized API deployment solution yet, but after watching the video, and looking through the site, it seems in line with what I'd expect to see from a next generation, containerized, API deployment solution.

Overlapping with my previous post on reconciling API orchestration, I'm seeing several stops along the API lifecycle present in API PLUG. Beyond just API deployment, it allows you to design, deploy, and document your API, which moves well into the world of API management. As I'm seeing with other emerging solutions like API Studio from Apigee, I predict we will see testing, monitoring, and other critical layers of the API life-cycle emerge as orchestration layers of API PLUG.

I've added API PLUG to my API deployment research, and I am still considering the best way to include them in other areas of my API research. I'm increasingly finding that modern API management solutions are spanning multiple areas of the API life-cycle, which further encourages me to modularize my research, but also tells me I'm going to need to create some sort of Venn Diagram visualization to help me better understand the reach of emerging API products and services.

Anyways, more to come on API PLUG as I play with it, and listen to the stories they tell via their blog, Twitter, and Github accounts (I can only find a Twitter account, but I'm sure they are working on that! ;-).


Algorithmic Transparency With Containers and APIs

I believe in the ability of APIs to pull back the curtain of the great Oz that we call IT. The average business and individual technology consumer has long been asked to just believe in the magic behind the tech we use, putting the control into the hands of those who are in the know. This is something that has begun to thaw, with the introduction of the Internet, and the usage of web APIs to drive applications of all shapes and sizes.

It isn't just that we are poking little holes into the corporate and government firewall, to drive the next generation of applications, it is also that a handful of API pioneers like Amazon, Flickr, Twitter, Twilio, and others saw the potential of making these exposed resources available to any developer. The pulling back of the curtain was done via these two acts: exposing resources using the Internet, and inviting in 3rd parties to learn about, and tap into these resources.

Something that is present in this evolution of software development is trust. API providers have to have a certain amount of trust that developers will respect their terms of service, and API consumers have to have a certain amount of trust that they can depend on API providers. To achieve this, there needs to be a healthy dose of transparency present, so API providers can see into what consumers are doing with their resources, and API consumers can see into the operations, and roadmap for the platform.

When transparency and trust do not exist, this is when the impact of APIs begins to break down, and they become simply another tech tool. If a platform is up to no good, has ill intentions, is selling vaporware, or there is corruption behind the scenes, the API concept is just going to create problems, for both provider and consumer. How much is exposed via an API interface is up to the API designer, architect, and ultimately the governing organization.

There are many varying motivations behind why companies open up APIs, and the reasons they make them public or not. APIs allow companies to keep control over their data, content, and algorithmic resources, while also opening them up so "innovation" can occur, or so they can simply be accessible by 3rd party resources, bypassing the historical friction or bottleneck that is IT and developer groups. Some companies I work with are aware of this balance being struck, while many others are not aware at all--they are simply trying to make innovation happen, or provide access to resources.

As I spend some brain cycles pondering algorithmic transparency, and the recent concept of "surge pricing" used by technology providers like Uber and Gogo, I am looking to understand how APIs can help pull back the curtain that is in front of many algorithms impacting our lives, in the same way APIs have pulled back the curtains on traditional IT operations, and software development. As part of this thought exercise I'm thinking about the role Docker and other virtualized containers can play in providing us with more transparency in how algorithms are making decisions around us.

When I deploy one of my APIs using my microservices model, it has two distinct API layers, one for the container, and one for what runs inside of it. Docker comes ready to go with an API for all aspects of its operations--here is a Swagger definition of it. What if all algorithms came with an API by default, just like each Docker container does? We would put algorithms into containers, and each would have an interface for every aspect of its operation. The API wouldn't expose the actual inner workings of the algorithm, and its calculations, but provide a complete interface for all its functionality.

How much of this API a company wished to expose would vary, just as it does with APIs today, but companies who cared about the trust balance between them, their developers, and end-users, could offer a certain amount of transparency to build trust. The API wouldn't give away the proprietary algorithm, but would give 3rd party groups a way to test assumptions, and verify the promises made around what an algorithm delivers, thus pulling back the curtain. With no API, we have to trust Uber, Gogo and other providers about what goes into their surge pricing. With an API, 3rd party regulators, and potentially any individual, could run tests, validating what is being presented as algorithmic truth.
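
As a thought experiment, here is a minimal sketch of what the surface area of such a containerized algorithm might look like, defined using Swagger--the endpoint, parameters, and responses are entirely hypothetical, meant only to show how a 3rd party could test inputs against outputs without ever seeing the calculation inside:

```json
{
  "swagger": "2.0",
  "info": { "title": "Surge Pricing Algorithm", "version": "1.0.0" },
  "basePath": "/v1",
  "paths": {
    "/pricing/multiplier": {
      "get": {
        "summary": "Return the pricing multiplier applied for a given time and place.",
        "parameters": [
          { "name": "lat", "in": "query", "type": "number", "required": true },
          { "name": "lon", "in": "query", "type": "number", "required": true },
          { "name": "timestamp", "in": "query", "type": "string", "required": true }
        ],
        "responses": {
          "200": { "description": "The multiplier in effect, without exposing the underlying calculation." }
        }
      }
    }
  }
}
```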

I know many companies, entrepreneurs, and IT folks will dismiss this as bullshit. I'm used to that. Most of them don't follow my beliefs around the balance between the tech, business, and politics of APIs, as well as the balance between platform, developer, end-users, and what I consider to be an inevitable collision with government regulation. For now this is just a thought exercise, but it is something I will be studying, and pondering more, and as I gather more examples of algorithmic, or "surge" pricing, I will work to evolve these thoughts, and solidify them into something more presentable.


Where Is The Containerized API Marketplace That Businesses Will Need For Success In The API Economy?

Building on what Orlando (@orliesaurus) brings up in his story on the missing brick in the API industry, we are missing a common place to discover and share common API designs--something I've been working hard on with my API Stack. As I read this piece from Microsoft Azure on Container Apps now available in the Azure Marketplace, I'm reminded of how robust the app deployment, and management ecosystem has become with the containerization evolution, but when it comes to API deployment we are still way behind the times.

There are numerous containerized marketplaces of apps available out there today; what I am looking for is the same thing, but for common API patterns. After over 10 years of solid design, deployment, and management of web API patterns, we should have a wealth of API design patterns to choose from. I shouldn't have to launch my own user API, or image API--there should be the WordPress of APIs out there, ready for me to deploy.

These APIs should implement common API patterns, be well defined using Swagger and API Blueprint, and be available in a variety of server side languages, from PHP to Go. I should be able to search, and browse these APIs, and with a single click of a button deploy them into my containerized architecture of choice, from Amazon to Google, and Microsoft to the Raspberry Pi in my living room--think Heroku deploy button.

Ok, back to work defining the common public APIs out there on my API Stack--could someone please get to work on this containerized API marketplace? I have a number of standardized API definitions I can share to get you going, and I am happy to focus on crafting any new ones you need, derived from common patterns I'm finding across my API Stack work.

There is a reason we have the bespoke API design, and deployment way of life--everyone is lacking the marketplace of ready-to-go, containerized APIs, that individuals, businesses, institutions, and government agencies will need for success in the API economy.


DreamFactory Deploying APIs In Docker Containers Is How The API Economy Will Scale

I have been talking about virtualized API stacks for a while now, slowly evolving my own view of what's next for the world of APIs, based upon what I'm seeing happen across the sector. I am working hard to convert my own master stack of APIs into 25+ separate container driven APIs, which I can then orchestrate into other collections for my own use, as well as the needs of my partners, and wholesale API clients.

The post Running DreamFactory as a Docker Container came across my monitors this weekend, which for me is a sign we are getting closer to a version of the API economy that I think will actually scale. The days where you launch one API platform, with one or many APIs, and you only sell that service via a single platform domain, have peaked. The technology, business, and politics of your API operations will have to be flexible enough to be deployed anywhere, by any partner you wish to give access, in a way that maximizes the same innovation we've seen popularize the API movement.

As John Sheehan (@johnsheehan) of Runscope says: "Containers will do for APIs what APIs have done for businesses over the last 15 years."

In 2015 all your APIs need to be modular, and containerized using common approaches like Docker, allowing your API infrastructure to be deployable anywhere, anytime. Hopefully by now you have your API management and service composition house in order, and the business of your APIs can also scale to meet this new, containerized landscape. If you are worried about discovery in all of this, you better start talking to me about APIs.json, and making sure you use a common API definition format like Swagger or API Blueprint across all of your APIs.

Just like early API providers such as SalesForce, Amazon, and Twilio were able to out-innovate, and out-maneuver their business competitors using APIs in the last decade, companies who can containerize their approach to API deployment and management, like DreamFactory brings to the table, will realize entirely new levels of agility, and flexibility--in a time when you really, really, really need any competitive advantage you can get to stay relevant.


@Broadcom, I Am Going To Need Your Switches To Support Virtualized Containers So I Can Deploy My Own APIs Too

While processing the news today over at API.Report, I came across a story about Broadcom delivering an API for managing their latest network infrastructure. The intersection of Software Defined Networking (SDN) and Application Programming Interface (API) is something I’m paying closer attention to lately. Hmmm. SDN + API = Brand New Bullshit Acronym? Meh. Onward, I just can’t slow down to care--{"packet keep moving"}.

At the networking level, I'm hearing Broadcom make a classic API infrastructure argument: "With the OpenNSL software platform, Broadcom is publishing APIs that map Broadcom's Software Development Kit (SDK) to an open north bound interface, enabling the integration of new applications and the ability to optimize switch hardware platforms," with examples of what you could build including "network monitoring, load balancing, service chaining, workload optimization and traffic engineering."

This new API driven approach to networking is available in the Broadcom Tomahawk and Trident II switches, looking to build up a developer community who can help deliver networking solutions, with Broadcom interested in giving, "users the freedom to control their technology, share their designs and boost application innovation.” Everything Broadcom is up to is in alignment with other valid Internet of Things (IoT) efforts I’m seeing across not just the networking arena, but almost any other physical object being connected to the Internet in 2015.

I think what Broadcom is doing is a very forward leaning effort, and providing native API support at the device level is definitely how you support "innovation" around your networking infrastructure. To keep in sync with the leading edge of the current API evolution as I'm seeing it, I would also recommend adding virtualized container support at the device level. As a developer I am thankful for the APIs that you are exposing, allowing me to develop custom solutions using your hardware, but I need you to take it one level further--I need to be able to deploy my own APIs using Docker, as well as work with your APIs, all running on your infrastructure.

I need your devices to support not just the web and mobile apps I will build around your hardware, and the API surface area you are providing with the new Tomahawk and Trident II switches--I need to also plug in my own microservice stack, and the microservices that vendors will be delivering to me. I need the next generation of switches to be API driven, but I also need to guarantee it is exactly the stack I need to achieve my networking objectives.

That concludes my intrusion into your road-map. I appreciate you even entertaining my ideas. I cannot share many more details on what I intend to do with your new SDN & API driven goodness, but if you work with me to let me innovate on your virtual and physical API stack, I feel that you will be surprised with what the overall community around your hardware will deliver.


Making My 3Scale API Management Portable With Containerized Micro-Services

As I work to find balance in this new micro-service, container driven world of APIs, 3Scale is continuing to be an essential layer to the business of my API / micro-service orchestration. In alignment with what I've been preaching for the last 5 years, I need to re-define my valuable API infrastructure resources to be as agile, flexible, and portable as I prescribe in the public stories that I tell.

Using 3Scale I have the service composition for my platform pretty well defined. The problem is, this definition is not as portable and flexible as it could be for the next shift in my API operations, via Docker, allowing me to manage loosely coupled stacks of micro-services. Within this new approach to API operations, I need every single Docker container I orchestrate to ping home, and understand where in my business service composition it exists and operates. This service composition allows me to offer the same service to everyone, or a multitude of services to as specific, or as diverse, a group of customers as I need. I don't want people to just use my APIs, I want them to use them exactly where and how they want and need to.

I need this stack of API management infrastructure available for me within any domain, complete or partial, on any container-driven architecture I need--aka AWS, Google, Microsoft, Rackspace, etc.

Even if this infrastructure is just a proxy of the 3Scale infrastructure API, I need it portable, living as a proxy anywhere I need it. I also need to be able to transform it (maybe APITools?) along the way. I need a modular set of 3Scale API management infrastructure so I can design my own API infrastructure no matter where it lives--in the cloud, or on-premise, even if it is in my closet on a Raspberry Pi, or on the wifi router for my home.

I can sign up new users to my API infrastructure, and allow them to access and consume any API resources I am serving up. I use this same infrastructure to build my applications, as well as make it accessible to my consumers. At any point one of my consumers may need to become a provider--I need their infrastructure to be portable, transferable, and as flexible as it possibly can be.

To accommodate the next wave of API growth I need my news API to be as flexible as possible for my needs, and I need to be able to also deploy it in any of my clients' infrastructure, when I need. This enters a whole new realm of wholesale API deployment and management. I can now segment usage of my APIs beyond just my own management service composition, and through my containerized deployment infrastructure, I can add an entirely new dimension to my architecture. The only thing I need is for my 3Scale infrastructure to be portable, containerized, and reusable--and since 3Scale practices what it preaches, and has an API, I can move forward with my ideas.

All of this is doable for me because 3Scale has an API for their API infrastructure (*mind blown*), I just need to design the right Docker image to proxy the API(s), and extend it as I need. As I do with all of my micro-services, I'm going to use Swagger to drive the functionality. I can take the Swagger definition for the entire stack of 3Scale API management resources, and include or omit each endpoint as I need--delivering exactly the API management stack I need.

More to come on my 3Scale API management micro-service Docker image, as I roll it off the assembly line.


Are Your APIs Ready For The Coming Containerization Evolution Of The API Space?

If you were at Defrag in Colorado last November, or in Sydney, Australia for API Days last week, you’ve heard me talk about what containers are doing to APIs. There is a subtle, but important shift in how APIs are being deployed occurring right now, and as John Sheehan (@johnsheehan), the CEO of Runscope says, containers are doing for APIs, what APIs have been doing for businesses.

As I was processing news for API.Report this morning, I found more evidence of this, with the release of a logging API container from Logentries. APIs have made resources like "logging" much more modular and portable, for use in multiple channels like mobile or via websites. A containerized logging API makes the concept of a logging API much more portable, by adding an entirely new dimension for deployment. You don't just have a logging API, you now have a logging API that can be deployed anywhere in the cloud, on-premise, or on any physical object.

This opens up a whole new world of APIs, one that goes beyond just a programmable web, quickly evolving us towards a programmable world, for the better, and the worse. Last month I asked, when will my router have Docker containers by default? I received an email from a major network equipment manufacturer this week, letting me know that they are working on it. This means little containers on the Internet-enabled objects across our worlds, ushering in the ability to deploy localized, wholesale APIs, giving us the ability to manifest exactly the virtual API stacks that we need, for any objective.

I try not to hype up where things are going in the API space, so I will stick with calling containers a significant evolution in how APIs are done. This new way of deploying APIs will push the evolution around the business of APIs, changing how we generate revenue from valuable resources, while also putting even more stress on the politics of APIs, with the introduction of more privacy and security concerns--not to mention adding a whole new dimension of complexity.

I'm not 100% sure where all of this is going. As with the rest of the API space, I struggle with making sense of all of this in real-time, and with allocating the mental bandwidth to see the big picture. All I can say at this moment is: make sure you work to better understand the various approaches to containerization, and adopt a microservice approach to your API design. Beyond this, all we can do is keep an eye on what companies like Logentries are doing when it comes to containerized API deployment, and try to learn as fast as we possibly can.


API Management Infrastructure And Service Composition Is Key To Orchestration With Microservices In A Containerized World

As I work to redefine my world using microservices, I have this sudden realization of how important my API management infrastructure is to all of this. Each one of my microservices is a little API that does one thing, and does it well, relying on my API management infrastructure to know who should be accessing it, and exactly how much of the resource they should have access to.

My note API shouldn't have to know anything about my users; it is just trained to ask my API management infrastructure whether each user has the proper credentials to access the resource, and what the service composition will allow them to do with it (aka read, write, how much, etc.). My note API does what it does best, store notes, and relies on my API management layer to do what it does best--manage access to the microservice.

This approach to API management has allowed me to deploy any number of microservices, using my API management infrastructure to compose my various service packages--this is called service composition. I employ 3Scale infrastructure for all my API / microservice management, which I use to define different service tiers like retail, wholesale, internal, and other service specific groupings. When users sign up for API access, I add them to one of the service tiers, and my API service composition layer handles the rest.

Modern API management service composition is the magic hand-waving in my microservice orchestration, and without it, it would be much more work for me to compose using microservices in this containerized API world that is unfolding.

Disclosure: 3Scale is an API Evangelist partner.


Using Swagger As Fingerprint For My Microservice Docker Containers

I'm rebuilding my underlying architecture using microservices, and Docker containers, and I'm using APIs.json for navigation and discovery within these new API stacks that I use to make my world go around. As I give each microservice an APIs.json file, taking inventory of the building blocks that make the service operate, I also begin including Docker in the equation, and I find myself using Swagger definitions as a sort of fingerprint for my Docker-powered microservices.

Each microservice lives as its own Github repository, within a specific organization. I give each one its own APIs.json, indexing all the API elements of that specific microservice. This can include anything I feel is important, like:

  • X-Signup - Where to signup for the service.
  • X-Twitter - The twitter account associated with the service.
  • X-Samples - Where you can find client side code samples.

As long as you put X- before the property, you can put anything you want. There are only a handful of sanctioned APIs.json property types, and one that you will find in almost all of the APIs.json files generated for my platform is:

  • Swagger - A reference to a machine readable Swagger definition for each API.

Another one I’m starting to use, as I’m building out my microservice infrastructure, is:

  • X-Docker-Image - A link to a docker image that supports this API.

Each of my microservices has a supporting Docker image that is the backend for the API. Sometimes I will have multiple Docker images providing variant back-ends for the same API fingerprint. Using APIs.json I can go beyond just finding the API definition, and other supporting building blocks--I can also find the backend needed to actually deliver on the API I have defined by a specific Swagger definition. I'm just in the early stages of this work, and this series of posts reflects me trying to work much of this out.
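
As a rough sketch of how this comes together (all URLs, names, and the version number here are illustrative), a single microservice's APIs.json might look like:

```json
{
  "name": "Notes",
  "description": "A simple microservice for storing notes",
  "url": "http://notes.example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [
    {
      "name": "Notes API",
      "humanURL": "http://notes.example.com",
      "baseURL": "http://notes.example.com/api",
      "properties": [
        { "type": "Swagger", "url": "http://notes.example.com/swagger.json" },
        { "type": "X-Signup", "url": "http://notes.example.com/signup" },
        { "type": "X-Docker-Image", "url": "https://hub.docker.com/r/example/notes-api/" }
      ]
    }
  ]
}
```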

You can browse my work on Github, most of it is public. The public elements all live in the gh-pages branch, while the private aspects live within the private master branch. It is all a living workbench for me, so expect broken things. If you have questions, or would like more access to better understand, let me know. I’m happy to consider adding you to the Github organization as collaborator so that you can see more of it in action. I will also chronicle my work here on the blog, as I have time, and have semi-interesting things to share.


Using APIs.json For My Microservice Navigation And Discovery

I'm rebuilding my underlying architecture using microservices and Docker containers, and the glue I'm using to bind it all together is APIs.json. I'm not just using APIs.json to deliver on discoverability for all of my services, I am also using it to navigate around my stack. Right now I only have about 10 microservices running, but I have a plan to add almost 50 in total by the time I'm done with this latest sprint.

Each microservice lives as its own Github repository, within a specific organization. I give each one its own APIs.json, indexing all the API elements of that specific microservice. APIs.json has two main collections, "apis" and "include". For each microservice's APIs.json, I list all the properties for that API, but I use the include element to document the URLs of the other microservice APIs.json files in the collection.

All the Github repositories for this microservice stack live within a single Github organization, which I give a "master" repo that acts as a single landing page for the entire stack. It has its own APIs.json file, but rather than having any API collections, it just uses includes, referencing the APIs.json for each microservice in the stack.
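
To illustrate the daisy chaining (all names and URLs here are made up), the master APIs.json contains no API collections of its own, just include references pointing at each microservice's own index:

```json
{
  "name": "My Microservice Stack",
  "description": "Master landing page index for the entire stack",
  "url": "http://master.example.com/apis.json",
  "specificationVersion": "0.14",
  "apis": [],
  "include": [
    { "name": "Notes", "url": "http://notes.example.com/apis.json" },
    { "name": "Images", "url": "http://images.example.com/apis.json" },
    { "name": "Links", "url": "http://links.example.com/apis.json" }
  ]
}
```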

APIs.json acts as an index for each microservice, but through the include collection it also provides links to the other related microservices within its own stack, which I use to navigate, in a circular way, between all supporting services. All of that sounds very dizzying to write out, and I'm sure you are like WTF? You can browse my work on Github; some of it is public, but much of it you have to have OAuth access to see. The public elements all live in the gh-pages branch, while the private aspects live within the private master branch.

This is all a living workbench for me, so expect broken things. If you have questions, or would like more access to better understand, let me know. I'm happy to consider adding you to the Github organization as a collaborator so that you can see more of it in action. I will also chronicle my work here on the blog, as I have time, and have anything interesting to share.


When Will My Router Have Docker Containers By Default?

This is something I’m working on building manually, but when will the wireless router for my home or business have Docker container support by default? I want to be able to deploy applications, and APIs either publicly or privately right on my own doorway to the Internet.

This would take more work than just adding storage, compute, and Docker support at the router level. To enable this there would have to be changes at the network level, which is something I'm not sure telco and cable providers are willing to support. I'll be researching this as a concept over the next couple months, so if you know of any ready-to-go solutions, let me know.

It seems like enabling a local layer for Docker deployment would make sense, and help move us towards a more programmable web, where notifications, messaging, storage, and other common elements of our digital lives can live locally. It seems like it would be a valuable aggregation point as the Internet of Things heats up.

I could envision webhooks management, and other Internet of Things household automation, living in this local, containerized layer of our online worlds. This is just a thought. I've done no research to flesh this idea out, which is why it's here on kinlane.com. If you know of anything, feel free to send information my way.


Using Containers To Bridge What Swagger Cannot Define On The Server-Side For My APIs

When I discuss what is possible when it comes to generating both server and client side code using machine readable API definitions like Swagger, I almost always get push-back, making sure I understand there are limitations of what can be auto-generated.

Machine readable API definitions like Swagger provide a great set of rules for describing the surface area of an API, but they are often missing many other elements necessary to define what server side code should actually be doing. The closest I've gotten to fully generating server side APIs is when it comes to very CRUD-based APIs that possess very simple data models--beyond that it is difficult to make "ends meet".

Personally, I do not think my Swagger specs should contain everything needed to define server implementations--this would make for a very bloated, and unusable Swagger specification. For me, Swagger is my central truth that I use in generating server side skeletons, client side samples and libraries, and to define testing, monitoring, and interactive documentation for developers.

Working to push this approach forward, I'm also now using Swagger as a central truth for my Docker containers, allowing me to use virtualization as a bridge for what my Swagger definitions cannot define for each micro-service or API deployed within a specific container. This approach leaves Swagger as purely a definition of the micro-service surface area, and leaves my Docker image to deliver on the back-end vision of this surface area.

These Docker images are using Swagger as a definition of their container surface area, but also as a fingerprint for the value they deliver. As an example, I have a notes API definition in Swagger, for which I have two separate Docker images that support the same interface. Each Docker image knows it only serves this particular Swagger specification, using it as a kind of fingerprint or truth for its own existence, but ultimately each Docker image delivers its own back-end experience derived from this notes API spec.
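
A minimal sketch of what this notes API fingerprint might look like (illustrative, not my actual definition)--both Docker images have to faithfully serve this same surface area, however differently they deliver it behind the scenes:

```json
{
  "swagger": "2.0",
  "info": { "title": "Notes", "version": "1.0.0" },
  "basePath": "/notes",
  "paths": {
    "/": {
      "get": {
        "summary": "List all notes.",
        "responses": { "200": { "description": "An array of notes." } }
      },
      "post": {
        "summary": "Add a new note.",
        "responses": { "201": { "description": "The note that was created." } }
      }
    }
  }
}
```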

I’m just getting going with this new approach to using Swagger as a central truth for each Docker container, but as I’m rolling out each of my API templates with this approach, I will blog more about it here. Eventually I am hoping to standardize my thoughts on using machine readable API definition formats like Swagger to guide the API functionality I'm delivering in virtualized containers.


Use APIs.json To Organize My Swagger Defined APIs Running In Docker Containers

I am continuing to evolve my use of Swagger as a kind of central truth in my API design, deployment, and management lifecycle. This time I'm using it as a fingerprint for defining how APIs or micro-services that run in Docker containers function. Along the way I'm also using APIs.json as a framework for organizing these Swagger driven containers into coherent API stacks or collections, that work together to accomplish a specific objective.

For the project I'm currently working on, I'm deploying multiple Swagger defined APIs, each as a separate Docker container, providing some essential components I need to operate API Evangelist, like blog, links, images, and notes. Each of these components has its own Swagger definition, a corresponding Docker image, and a specific instance of that Docker image deployed within a container.

I'm using APIs.json to daisy chain all of these APIs or micro-services together. I have about 15 APIs deployed, each as standalone services, with their own APIs.json, and supporting Swagger definition, but the overall project has a centralized APIs.json, which, using the include property, provides linkage to each individual micro-service's APIs.json--loosely coupling all of these container driven micro-services under a single umbrella.

At this point, I’m just using APIs.json as a harness to define, and bring together the technical aspects of my new API stack. As I pull together other business elements like on-boarding, documentation, and code samples, I will add these as properties to my parent APIs.json. My goal is to demonstrate the value APIs.json brings to the table when it comes to API and micro-service discovery, specifically when you deploy distributed API stacks or collections using Docker container deployed micro-services.


Project Idea: Codenvy-Like Containerized Spreadsheets

I wrote a story last week about a company I'm advising called Codenvy, who is delivering modular, scalable, cloud development environments using their web IDE and Docker. I'm currently working my way through the spreadsheet-to-API, and API-to-spreadsheet solutions I track on, and it is making me think that someone should deliver a Codenvy-like containerized spreadsheet environment.

With this type of environment you could forever end the emailing of spreadsheets, allowing people to craft spreadsheets, complete with all the data and visualization goodness they desire, and clone, fork, share, and collaborate around these containerized spreadsheets. You could have a library of master planned spreadsheets that your company depends on, and manage user permissions, analytics, and even scalability, performance, and distribution of vital spreadsheets.

Using this platform you could easily spawn spreadsheet driven APIs using services like API Spark, and you could come full circle and have spreadsheets that obtain their data directly from valuable API driven resources, to really blow your mind. Personally I'm not a big spreadsheet lover, but I do recognize that it is where a majority of the world's data resides, and the user interface of choice for many business folk--making it a good candidate for the next generation of containerized API data sources, and clients.

Just another one of my random ideas that accumulate in my Evernote, that I'll never have time to build myself, and am hoping you will tackle for me. As I review all of the existing spreadsheet API solutions available currently, I'm sure I'll have more API centric spreadsheet ideas bubble up to share with you.


Our Development Environment Is Now Containerized And Scalable Like Our Production Environment

The production environment for delivering web and mobile applications has radically evolved in the last couple years, becoming more distributed, scalable, virtualized, and containerized. APIs, and new development frameworks, are providing a smorgasbord of resources for developers to put to work, and devops is putting more power and control, throughout the development life cycle, directly into developers' hands. It is time that a developer's environment evolves to keep pace with the environment developers are building applications for.

I’ve been talking about containerization in the API space for a while now, something that is still a very manual process, so I have been looking out for a potential development environment that matches my vision for orchestration with APIs for some time now. This is why I was stoked when I ran into Tyler Jewell (@TylerJewell) back in September, who introduced me to Codenvy—"a new SaaS developer environment that allows developers to create hosted environments optimized for creating, editing, compiling, testing and debugging applications authored in different programming languages”.

At first glance Codenvy appears to be just another web-based IDE, with autocomplete, data source management, and Github integration. As you play with it more you begin to realize you can configure and share projects, allowing you to set up environments, share, and even clone them. Then you realize the whole IDE is configurable, clone-able, shareable, and scalable using the ever popular Docker, complete with API or CLI access to all work spaces, projects and files. This is the environment I was looking for--this is the orchestration I envision for building applications using APIs.

My head is swimming with ideas of how I can orchestrate application development environments now that both the development and production environments are containerized, scalable, bite-size goodness, complete with a Github layer in the middle. While many users of Codenvy will be using it to manage their own devops workflow, I will be using it to manage my own infrastructure, but more importantly using it as a platform to craft custom API stacks, share and publish them publicly, and explore more opportunities around being an API broker.

Disclosure: I am an adviser to Codenvy.


Relationship Between APIs And Containers

Containers are a fast growing trend when it comes to delivering compute resources online. Reflecting the world of API design, I feel containers are about deploying exactly the cloud resources you will need to complete a specific compute objective, then putting everything into a single virtualized container definition, that can be forked, cloned, scaled, and evolved independently of any other resources. Each individual container possesses everything it will need, from the operating system, file system, database, and necessary libraries, to the APIs to accomplish its given objective.

Virtual containers will make API driven resources even more agile, nimble, and scalable--not just for providers, but also for consumers. Everything we have right now I would consider to be a retail API market--where even though APIs are often sold B2B, they are generally a one-size-fits-all API solution. Think about when API resources become modular, customizable, scalable, wholesale resources that can be bought and sold, and deployed anywhere, making all digital assets into commodities that are ready for use anywhere in the API economy.

My goal is to understand how APIs and containers work together, and as I do with all of my research, I have set up a Github repo for tracking every piece of container news I read that I feel is relevant, each company doing interesting things with containers, tools for working with containers, and ultimately my analysis. Cloud computing will continue to be the lead driver of innovation in the API space, and I feel the introduction of cloud computing in 2006–2008 gave legitimacy to web APIs—you could now deploy global infrastructure using web APIs, game on. I think containerization will take API design, deployment, management, and integration to the next phase, and closer to the early visions of a programmable web we all started envisioning around 2005.

I will be taking each of my API designs and developing an Amazon Machine Image, a Heroku app version, as well as a Docker container definition. I want every one of my open source API designs to have the option to be deployed in any cloud environment, preferably with the click of a button, as with Heroku. In the future it won't be about knowing all the APIs, it will be about having the right stack of APIs that you can deploy anywhere, anytime.
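Assuming each API design's repository carries a Dockerfile like the sketch above, the deploy-anywhere workflow comes down to a couple of commands. The image and container names here are hypothetical:

```
# Build the container image from the API design's repository
docker build -t screen-capture-api .

# Run it in any Docker-capable environment, mapping the API's port
docker run -d -p 5000:5000 --name screen-capture screen-capture-api
```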

I will be talking on this subject at @APIStrat in Chicago this September, and also at Defrag in Colorado this November, let me know if you want to connect around the subject.


Push Button API Deployment With The Heroku Button

The new Heroku Button gets us one step closer to a new age of API deployment, where anyone can deploy the APIs they need without any developer or IT resources. As I work on packaging up API designs for my screen capture and image manipulation APIs, this type of approach is what I’m envisioning for all of my APIs in the future—push button API deployment.

You shouldn't have to wait to deploy the API you need. Just as we are beginning to deploy pre-packaged application stacks like WordPress and Drupal, we should be able to deploy common API deployments for images, blogs, videos, and much, much more, with a single click of a button. Once any new API is launched it can be configured and connected to other systems using the API, allowing it to operate as part of a larger stack, or stay a completely independent node that just checks in with the mother ship from time to time.
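The Heroku Button itself is just a link dropped into a repository's README, pointing Heroku's deploy endpoint at the repository. If I have the button markup right, it looks something like this, with the repository URL below being a placeholder:

```
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/example/image-api)
```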

While there will remain a handful of API leaders like Twilio and SendGrid who will have a big presence, many of the APIs in this next wave of API deployment will be smaller and more transient in nature, taking advantage of current cloud trends around PaaS and containerization. This new type of API will possess a self-contained blueprint for the OS, database, server-side code, API definition, and even the configuration, integration, and automation tooling for the API.

I’m working on Docker definitions for my screen capture and image manipulation APIs, as well as other APIs I develop in the future, but first I think any API I design should run as a Heroku app that anyone can deploy in their own account with a single click of a button. It won't take much work to make this happen for any API I deploy, and since 3Scale API infrastructure, which I use to secure my APIs, already runs on Heroku, securing and managing my Heroku-deployed APIs will be seamless.
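Making a repository Heroku Button ready mostly comes down to adding an app.json manifest describing the app and the config it needs. Here is a minimal sketch; the names and the 3Scale variable are hypothetical placeholders for however the API is actually secured:

```
{
  "name": "Screen Capture API",
  "description": "A self-contained screen capture API, deployable with one click",
  "keywords": ["api", "screen-capture"],
  "env": {
    "THREESCALE_PROVIDER_KEY": {
      "description": "Placeholder for the key used to secure the API with 3Scale"
    }
  }
}
```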

Disclosure: 3Scale is an API Evangelist partner.


Everyone Is About To Get An API With The New WordPress API

While at API Craft in Detroit this week I had the pleasure of hanging with two leads on the WordPress(org) development team, and discussing the API strategy for the blogging platform. Andrew Nacin (@nacin), Lead Developer, and Ryan McCue (@rmccue), WordPress Plugin Developer, facilitated an open circle discussion to work through the challenges that WordPress is facing when developing an API for the open source blogging platform.

At face value, I know a number of API developers who will be less than pleased when they hear about a WordPress API, as both PHP and WordPress are easy targets for developers' hatred, for generating less than perfect code. ;-) But, in the end, you can't ignore some of the stats on WordPress usage:

These stats don’t even touch on the number of WordPress plugins and mobile solutions that are built on the insanely popular website and blogging platform. I couldn't think of a better platform to start rolling out APIs to the masses, and quickly educating a very large group of tech savvy folks along the way.

Many of these WordPress sites are already integrated with popular API platforms like Twitter, Facebook, Google, and Amazon, so it makes sense to educate WordPress administrators on the APIs they are already using, as well as introduce them to the concept of having their own API around the potentially valuable content they generate.

We Are Talking About Developing 60M+ Distributed APIs
There is no API deployment or management service provider that is even close to numbers of this magnitude. When ready, the WordPress API will give millions of websites, and the individuals, companies, organizations, and government agencies behind them, an Application Programming Interface (API). (weird, I haven't spelled that out in a while) What does this mean for the API sector? I do not know yet, but to me it definitely resembles the long tail of API deployment that I, and 3Scale, have been advocating for the last four years.

Building For The Lowest Common Denominator
One dominant theme in the conversation around the WordPress API at API Craft was about building an API for the lowest common denominator. WordPress is an open source blogging platform built using PHP and MySQL, something that enables it to be installed on just about any hosting platform, which has contributed to the platform's growth. However, many of these platforms restrict common aspects of the Internet, like the ability to use all of your HTTP verbs, such as PUT or DELETE, or simply do not offer essentials like SSL, which is needed for OAuth. The bar for developing an API that will be used by 60M+ providers has to be set pretty low, ensuring it will work across the entire ecosystem—something WordPress is already pretty damn good at.
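One common workaround for hosts that block PUT and DELETE is tunneling those verbs through POST with an override header. The sketch below follows that pattern; the route and payload are illustrative, so check the WP API plugin documentation for the exact conventions:

```
# PUT is blocked on many low-end hosts, so tunnel it through POST
# (route and fields are illustrative, not confirmed plugin specifics)
curl -X POST "https://example.com/wp-json/posts/123" \
  -H "X-HTTP-Method-Override: PUT" \
  -H "Content-Type: application/json" \
  -d '{"title": "Updated through a lowest common denominator host"}'
```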

Ensuring That The WordPress API Is Extensible
One of the hallmarks of WordPress is the ability to create custom themes and plugins that extend the functionality of the blogging platform, and the API will be no different. WordPress is much more than just a collection of pages, posts, tags, and links. WordPress has been used as the core of just about any type of website, serving up books, photos, videos, maps, and any other content type you can dream up. The WordPress API has to provide a common interface that can be used to create, read, update, and delete core WP content types, like pages and posts, as well as be extended to any other interpretation the WordPress community can dream up—adding another dimension to the already massive scope of delivering an API to 60M+ websites.
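To make that common interface concrete, here is roughly what reading and creating core content types looks like against the plugin's routes. Paths, field names, and authentication vary by version, so treat this as a sketch rather than the definitive API:

```
# Read recent posts (route as in the early plugin; later versions moved paths)
curl "https://example.com/wp-json/posts"

# Create a post; the credentials and fields here are illustrative, and the
# plugin supports several authentication schemes, including OAuth
curl -X POST "https://example.com/wp-json/posts" \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{"title": "Hello WP API", "content": "Posted through the API."}'
```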

Single Place To Educate Everyone About APIs
I can't think of another platform that has introduced the average person to web programming more than WordPress (maybe Drupal?). When you go back to the concept of the lowest common denominator, WordPress employs PHP and MySQL to drive its functionality, and combined with the plugin and theme extensibility, the platform has provided a rich gateway to the world of web and mobile app development. The opportunities to introduce and educate the masses on the benefits (and downsides) of APIs are huge! While the already established API developer community may not want the WordPress community developing APIs, it will be the quickest way we get from 10K APIs to millions of APIs. WordPress drives 22.0% of the top 10 million websites, which will instantly provide access to some potentially high value content and data, significantly increasing the size of the overall API economy, as well as contributing to API literacy in every business sector, around the globe.

At the time of this post, the WordPress API is only a plugin, which means it is an optional part of the platform, requiring a WordPress site administrator to know about the plugin and install it as part of their individual WordPress installation. However, the goal is to bake the API into WordPress as part of the 4.1 release, which could happen as soon as this fall, or at the latest the first or second quarter of 2015. While not all 60M+ WordPress sites will immediately update to the 4.1 release, a significant portion of websites around the world will instantly have an API--good or bad.

I’m pretty excited at the idea of so many websites having APIs, and at the same time I'm completely terrified. It is kind of like flooding the market with some very high value APIs, while also adding millions of very shitty APIs, and I'm sure a lot in the middle. With a lot of the problems we are already facing around discovery and integration at 10K APIs, imagine what things will look like with millions of APIs all of a sudden. Do we want every website to have an API this fast?

Beyond just giving a website an API, I'm also very interested in the potential of using WordPress API as purely an API core, allowing it to be used to drive content and data for mobile, and single page applications (SPA). Considering the innovation we've seen around the core WordPress platform, from the community, I imagine we'll see similar when it comes to just raw API deployment, and when you start considering the potential when bundled with the latest containerization movement, led by Docker.io, and being driven by Amazon, Google, Microsoft, and Red Hat—your head will start to spin.

WordPress API + Virtual Containers = API Deployment As Part Of The Fabric Of The Cloud

I will be carving out more time to consider the implications of the introduction of the WordPress API, and hopefully provide more feedback to the WordPress API development team. If you are a seasoned WordPress developer, and experienced API consumer, you might consider getting involved too. You can get access to the source code for WP API over at Github, and you can submit issues via the issue management for the project—they also provide updates via the WP blog tagged as json-api.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.