An update on container support on Google Cloud Platform

135 points by proppy 11 years ago | 33 comments
  • jbeda 11 years ago
    We are particularly excited about Kubernetes. We are taking the ideas for how we do cluster management at Google and creating an open source project to manage Docker containers.

    https://github.com/GoogleCloudPlatform/kubernetes

  • michaelmior 11 years ago
    "Everything at Google, from Search to Gmail, is packaged and run in a Linux container." Was this something which Google had previously disclosed? Seems a bit surprising to me.
    • jbeda 11 years ago
      Yeah -- I talked about it a couple of weeks ago at GlueCon. Also shared that we launch over 2 billion containers every week.

      My slides from the talk: https://speakerdeck.com/jbeda/containers-at-scale PDF: http://slides.eightypercent.net/GlueCon%202014%20-%20Contain...

      • zmanian 11 years ago
        I went to a talk on Omega at Box.com about a year ago. At that point Omega was still about managing Google's giant statically linked binaries.

        Did Google switch to containers in a year? Maybe the answer is in your slides? If so, that's crazy...

        • zeroxfe 11 years ago
          The statically linked binaries run inside the containers. Static linking gives you a certain kind of portability (no need for library dependencies on the machine). The containers give you isolation, resource management, etc.
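
          A minimal sketch of the static-binary half of that, assuming a Go toolchain (Go is only illustrative here, not a claim about Google's internal binaries): building with CGO_ENABLED=0 gives a single self-contained executable with no shared-library dependencies, so the container around it only has to supply isolation and resource limits, not libraries.

              // hello.go -- a trivial service that compiles to a fully static binary.
              // Build: CGO_ENABLED=0 go build -o hello hello.go
              // The result has no dynamic library dependencies, so it can be dropped
              // into an otherwise empty container image; the container layer only
              // provides isolation and resource accounting.
              package main

              import (
                  "fmt"
                  "log"
                  "net/http"
              )

              func main() {
                  http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                      fmt.Fprintln(w, "hello from a statically linked binary")
                  })
                  log.Fatal(http.ListenAndServe(":8080", nil))
              }
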
          • menage 11 years ago
            Google had been using primitive kernel containers (based on cpusets and fake NUMA emulation) since early 2007 - this was quite some time prior to getting cgroups into the mainline kernel.
          • michaelmior 11 years ago
            Thanks for the link!
          • SEJeff 11 years ago
            I'm guessing you weren't aware that Google is the company that wrote the overwhelming majority of the cgroup subsystem and much of the namespace bits that lxc/docker use. Paul Menage and Rohit something were the two biggest kernel guys on it, if memory serves. I used to read the LKML firehose actively and gave up eventually.
            • menage 11 years ago
              Yes, we contributed the core cgroups system, and a good chunk of the memory and I/O cgroup subsystems. Google didn't have much need for namespacing though - when you have complete control over all the code that's running, it's possible to have the common libraries co-ordinating with the cluster management system to assign things like network ports and userids, so there's no need to virtualize IP addresses or userids. Google's containers are (or at least were - I left a few years ago) just for resource isolation.
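
              A minimal sketch of that resource-isolation-only idea, assuming a Linux host with cgroup v2 mounted at /sys/fs/cgroup and root privileges (the group name and limits are made up for illustration; this is not Google's internal system, which long predates cgroup v2):

                  // cgroup_demo.go -- constrain a process with cgroups alone: no PID,
                  // user, or network namespacing, just memory and CPU limits.
                  // Assumes cgroup v2 at /sys/fs/cgroup and root; "demo" is a made-up name.
                  package main

                  import (
                      "fmt"
                      "os"
                      "path/filepath"
                      "strconv"
                  )

                  func must(err error) {
                      if err != nil {
                          panic(err)
                      }
                  }

                  func main() {
                      group := "/sys/fs/cgroup/demo"
                      must(os.MkdirAll(group, 0o755))

                      // Cap memory at 256 MiB and CPU at half a core (50ms per 100ms period).
                      must(os.WriteFile(filepath.Join(group, "memory.max"), []byte("268435456"), 0o644))
                      must(os.WriteFile(filepath.Join(group, "cpu.max"), []byte("50000 100000"), 0o644))

                      // Move this process into the group. It still sees the host's PIDs,
                      // users, and network as usual -- only its resource usage is limited.
                      must(os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(strconv.Itoa(os.Getpid())), 0o644))

                      fmt.Println("running under cgroup limits, with no namespaces involved")
                  }
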
              • michaelmior 11 years ago
                Thanks for the info, Jeff! No, I wasn't aware of this.
              • thockingoog 11 years ago
                We've been talking about containers for years. I presented some of our problems with them back in 2011, and I know we were talking about them before that. :)
              • planckscnst 11 years ago
                This post was written by Eric Brewer, the author of the CAP theorem.
                • mp99e99 11 years ago
                  Is that the same Eric Brewer from Inktomi?
                • seiji 11 years ago
                  I imagine he's done some noteworthy things since? That's like introducing Elon Musk as "Founder of X.com."

                  Essentially it comes across as "I'm a fan of your early work, but nothing you've done since matters."

                  • planckscnst 11 years ago
                    He's done a bunch of stuff, but CAP is probably his most popular work. Even Wikipedia says that's what he's "known for".
                • PanMan 11 years ago
                  I know Google runs at a huge scale, but isn't 2 billion containers a week a LOT, even for them? I assume a lot of these only run for a really short time? Are containers the new scripts?
                  • jbeda 11 years ago
                    I can't give specifics, but a lot of these are short lived. For example, if you launch a MapReduce, it'll typically launch containers for each of the workers and then take them down when the MR is done.

                    This also doesn't speak to the number of long-running containers. There are plenty that don't stop or start at all during the week I grabbed that number from.

                    • derefr 11 years ago
                      If docker images are fancy static binaries, then docker containers are fancy OS processes. Going through two billion OS PIDs in a week doesn't seem that hard.
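
                      For scale, 2 billion launches a week works out to roughly 2,000,000,000 / 604,800 seconds, or about 3,300 container starts per second averaged over the week.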
                    • cmelbye 11 years ago
                      > Are containers the new scripts?

                      They certainly seem to work well for that. Heroku, for example, uses containers not just for persistent processes (application servers, workers) but also for short-lived processes. Tasks that run on a schedule (hourly, daily, etc.) are run by, you guessed it, starting up a container running a process that exits when it's finished. One-off commands like maintenance scripts or REPLs work the same way.
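
                      A minimal sketch of that one-off pattern, shelling out to the docker CLI from Go (the image name and command are placeholders rather than Heroku's actual machinery, and a local Docker daemon is assumed):

                          // oneoff.go -- run a short-lived task in a throwaway container, the
                          // same shape as a scheduled job or a one-off maintenance command:
                          // start a container, let the process inside run to completion, then
                          // clean it up. "myapp:latest" and the command are placeholders.
                          package main

                          import (
                              "log"
                              "os"
                              "os/exec"
                          )

                          func main() {
                              // --rm removes the container as soon as its process exits.
                              cmd := exec.Command("docker", "run", "--rm", "myapp:latest", "rake", "cleanup")
                              cmd.Stdout = os.Stdout
                              cmd.Stderr = os.Stderr
                              if err := cmd.Run(); err != nil {
                                  log.Fatalf("one-off task failed: %v", err)
                              }
                          }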
