Systems Interfaces
https://plus.google.com/hangouts/_/66d64f30e1f51869afa07fa8b589b648bd481a7a
Matt Nagi ...
- Good Morning!
- Noah sharing about himself. Works for Heroku.
- "I'm very interested in APIs. I'm a systems guy."
- Lots of linux, and works on the application server
- Layers of APIs going deep down into the system
- There are lots of questions about how we should use APIs, but there are a lot of APIs established in the '60s that we use and build on top of. They are mostly operating system APIs.
- Let's try to get web interfaces to that stage.
Nick is showing some slides concerning systems and APIs.
User space is a metaphor for APIs in general. You create an account and start using the userspace API. For example, the userspace for Twilio is sending text messages.
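The Twilio userspace idea can be sketched in a few lines. This is illustrative only: the endpoint is Twilio's documented Messages resource, but the account SID, token, and phone numbers below are placeholders, and the request is built without being sent.

```ruby
require "net/http"
require "uri"

# Placeholder credentials -- not real values.
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token  = "your_auth_token"

uri = URI("https://api.twilio.com/2010-04-01/Accounts/#{account_sid}/Messages.json")
request = Net::HTTP::Post.new(uri)
request.basic_auth(account_sid, auth_token)
request.set_form_data(
  "From" => "+15005550006",  # Twilio's magic test number
  "To"   => "+15005550001",
  "Body" => "hello from userspace"
)

# Sending this request is the whole "userspace": you POST a form,
# and Twilio hides the carriers and telephony stack underneath.
puts request.path
```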
Matt from Quicken Loans speaking. "We have a mobile app. The users are anyone who is going through the process of getting a mortgage."
Jeremy, "works on a product that does image sizing". Most of the front-end is designer people.
Very different domains. There is some system that has control, and that imposes a lot of constraints on the user. But you still give the user a space to do something.
It's this thing you build for people to play in but you can't get outside of it.
The point is to make it easy for your users. You hide the complexity for the user. Make it a fun sandbox.
Nesting doll of system interfaces. Let's talk about that.
Giving the example of a Sinatra app. Language virtual machine.
MRI Ruby VM - bytecode. The language VM is pretty well established. Something takes your code, parses it, and converts it into a set of valid bytecodes/tokens.
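You can peek at this layer directly in MRI: `RubyVM::InstructionSequence` compiles source into the bytecode the VM executes. A minimal look:

```ruby
# MRI's language VM layer: compile Ruby source into an instruction
# sequence, then inspect and run the resulting bytecode.
iseq = RubyVM::InstructionSequence.compile("1 + 1")

puts iseq.disasm   # human-readable bytecode listing
puts iseq.eval     # the VM executes the bytecode
```

The exact instruction names vary between Ruby versions, but the layering is the same: your source is one interface, the bytecode below it is another.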
The next layer in Heroku. Heroku LXC container - exec. LXC is one of the most interesting technologies right now.
(docker.io was also mentioned)
LXC stands for Linux Containers.
It's isolation of process tables, isolation for security purposes, and more.
Dotcloud is a platform as a service.
Within the next few years, everything will probably be running in containers. Really strong interfaces for isolation. A daemon can't go crazy and use all the resources.
LXCs are really really fast.
Back from that tangent. So at Heroku every single app runs in one of these LXC containers.
All this is running on Ubuntu. Ubuntu provides the filesystem that your userspace can use.
Ubuntu does release management really well.
As API developers we should look at and figure out how this distributed company manages to build and release an operating system, give it to users, and never break anything.
The operating system maintainers have figured this out already.
It's small tools that you can put together. Narrow but powerful interfaces. And let users put those together how they see fit.
Decoupled
Combinable
One way to link to libraries
Dependencies
Thinking about the operating system's influences.
How do we approach that with our own APIs?
Many endpoints? One way to link to libraries. Resources + Links
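The "Resources + Links" idea can be sketched as data: each resource carries links to related resources, the way small Unix tools compose through narrow interfaces. The resource names and paths below are illustrative, not from any particular API.

```ruby
require "json"

# A hypermedia-style resource: the representation itself links to
# related resources, so clients follow links instead of hardcoding URLs.
line_item = {
  "id"    => "li_123",
  "total" => 4900,
  "links" => {
    "self"  => "/line_items/li_123",
    "price" => "/prices/pr_456"  # e.g. a price attached to a line item
  }
}

puts JSON.generate(line_item)
```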
Secret Admirer. Prices can attach a link to a line item.
Encapsulation, where you are isolating.
RBAC (role-based access control)
At Heroku we use Amazon Web Services.
Xen - virtualization. Hypervisor. You are inside this userspace doing low-level authorization - this process can get direct access to a CPU.
Amazon Linux - somewhere there is a data center and they give you an API to start/stop their machines.
Paravirtual API. A CPU exposes it. Xen or VMware use it. It's a way for things down in the stack to get access to the CPU. Software virtualization and the APIs it exposes are still important for software work.
We want some alignment with how the APIs work under the hood. Let's try and align them to use the libraries that are powering them under the hood.
Talked briefly about Erlang, and also about Node.
CPU - one of the last levels. There is some piece of silicon inside with the x86 instruction set with crazy extensions to it for virtualization.
The disruptor ring - example of knowing what is actually happening under the hood helps you improve speed.
You have to in some cases know more than what the interface tells you to really do things very well.
We are lucky we don't have to worry about all these interfaces. The code under some of these interfaces is so complex that you can't know it all. The interface must be good.
It can be hard to line these things up. We need to get as crazy aligned as these hardware developers. Things generally don't break for them. Our APIs break a lot.
Every level is hiding some degree of ugliness. That's why we need abstractions. It hides complexity.
Physics - the interface to rule them all.
Example: Transistors - it's a physical thing, layered the right way with electrons and does some sort of gate.
Responsibilities are clear on the systems stuff, but not clear on the API stuff. There is no common 'open source' API to rule them all. There is a point where you don't want to share anymore because it hurts you against your competition.
Should we have some sort of standardization on these APIs? There are good engineering practices that we aren't using in our software space.
You have the public facing APIs of your own, but you also have a lot of 3rd parties. Thinking about the 3rd parties would help come up with some sort of standardization. Setting the standard.
Layers of abstractions make it black box to black box to black box. The abstractions could always change too.
Deep linking is a way to find what you need.
Virtualization is not user friendly, but it is operator friendly.
Security: You have to have a boundary.
Expressiveness. That's how we get things done. Things need to eventually be expressive. That's the tradeoff for all these abstractions.
We go into more group discussion now:
We are talking about adding endpoints - evolving those over time. Evolving internal and public.
We are thinking that discovery and documentation of those APIs is important. Some sort of directory service of your APIs.
Service discovery in hypermedia might have some identifier. Maybe just linking.
Maturity vs. discoverability. Noah brought up how in Linux things don't change all the time. These internal APIs don't use a hypermedia approach. It's not discoverability. It's maturity.
With a large enough audience you need to keep your API consistent. It shouldn't change.
The lower you go on the stack the more well defined the interfaces are.
The difference as you go down a level is that the semantics have been defined and accepted.
How about package managers? It is a way to manage versioning and keep things reliable.
Tradeoff for package managers is the isolation route.
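In Ruby terms, a package manager managing versioning looks like a Gemfile: pin what must not change, constrain what may drift. The gems and versions below are just illustrative.

```ruby
# Gemfile -- a package manager's way of keeping things reliable
source "https://rubygems.org"

gem "sinatra", "~> 1.4"   # pessimistic constraint: any 1.4.x, nothing newer
gem "json",    "= 1.8.1"  # exact pin: fully reproducible install
```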
"You have to be able to innovate." Microsoft never deprecates anything, so they can't innovate as fast because they are stuck with old stuff.
At the server level you are empowered to go back and use an old version, but at the API level you really can't do that.
Versioning is at the heart of this all. You need a way to deprecate.
Containers also.
The metaphor of container is really important and really apt. Shipping containers changed shipping.
Example of hardware APIs breaking: hardware for a while was always about making the transistors smaller. Then it went to more cores on the machine. Now as software developers we have a whole new job managing those multiple cores.
What does this mean for how we are developing stuff nowadays?
You need a way to deprecate things - just like systems do. Because if you can deprecate then you can move forward and innovate.
But you also need an ability to stay on something old - just like systems do. This allows people to continue working with old things.
Versioning can solve that. This is how systems solve it. With versioning generally.
And you move people onto the newer systems by giving them the new hotness. This is what Ubuntu does as well. 10.04 is supported for like 5 years, it's stable, but you won't get some of the stuff in 12.x.
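The deprecate-but-let-people-stay pattern above can be sketched as version routing. Heroku's platform API really does pin versions in the Accept header ("application/vnd.heroku+json; version=3"); the handler shapes below are hypothetical, just to show old and new living side by side until the old one is formally removed.

```ruby
# Illustrative version routing: clients pin a version in the Accept
# header; the server keeps old handlers alive until deprecation.
HANDLERS = {
  "2" => ->(req) { { "name" => req[:app] } },              # legacy response shape
  "3" => ->(req) { { "app" => { "name" => req[:app] } } }  # current response shape
}

def handle(accept_header, req)
  version = accept_header[/version=(\d+)/, 1] || "3"  # default to newest
  handler = HANDLERS.fetch(version) { raise "version #{version} has been removed" }
  handler.call(req)
end

puts handle("application/vnd.heroku+json; version=2", app: "demo")
```

Removing an entry from the table is the deprecation; until then, old clients keep working unchanged.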