NetworkWideUpdates

Status

Introduction

Network Wide Updates provide a framework that allows systems to get their software updates and new packages from a central repository.

Rationale

Network Wide Updates provide a framework that allows many systems on a network to get updated software packages from a central repository. Some thoughts behind it:

  • This not only saves bandwidth; in an enterprise setting it also keeps all machines up to date
  • Could an apt-cacher based solution cover this?

Scope and Use Cases

  • Need to be able to push updates to a (large) group of machines in one go, so we don't need to touch each one
  • Might want to install arbitrary local packages / archives on all machines
  • Need to conserve bandwidth in large environments (proxy/cache)

Implementation Plan

  • Big red button to deploy the updates
    • Package auto-pkg-update (depends on ssh-server) that creates a sudo user that can only run "apt-get install" and sets up an ssh configuration
      • Instead of a plain sudo rule we may write a small wrapper application that mediates the calls to apt-get, so that the sudoers file does not get too complicated and we can make sure that no options are passed to apt that make it read a different configuration/sources.list; otherwise an attacker could install any package they want. A sketch of such a wrapper follows this list.
      • Tricky: generating the package, as it needs an ssh key. Solution: have an auto-pkg-update-source package that generates the binary package.
      • The package should set up/modify the sources.list of the clients too.
      • Add a note about the recommended proxy/cache.
      • Push individual packages.
  • Proxy for the packages
    • Investigate the various apt-proxy programs. We should probably only make recommendations to the user rather than automate this, since we wouldn't necessarily know where in their network they want a proxy, and a proxy is not needed for the other features to work (for bandwidth reasons we probably don't want to encourage a mirror/partial-mirror tool):

      • apt-proxy

      • apt-cacher

      • squid (squid is reported to be a good candidate if maximum_object_size is raised above its default; see the squid fragment after the HOWTO below)

  • Tools for the creation of local repositories should be integrated: a single button creates a Packages file and a Release file and signs it afterwards. It works on a per-directory basis. Repositories created like this need to integrate with the auto-pkg-update package and update the sources.list of all the clients. A sketch of the underlying commands follows this list.

  • The working title for it is "apu-{server,client}"
  • An outstanding issue is how to deal with packages that ask questions (debconf/shell prompts)
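
A minimal sketch of the wrapper idea above, assuming hypothetical names (apu-apt-wrapper, an apu user) that are not part of the spec:

    #!/bin/sh
    # apu-apt-wrapper: invoked via sudo by the apu user over ssh.
    # Only a fixed set of apt-get operations is allowed, and no extra
    # options may be passed, so a compromised client key cannot make
    # apt read a different configuration or sources.list.
    case "$1" in
        update)
            exec /usr/bin/apt-get update ;;
        dist-upgrade)
            exec /usr/bin/apt-get -y dist-upgrade ;;
        install)
            shift
            for arg in "$@"; do
                case "$arg" in
                    # reject anything that looks like an option
                    -*) echo "options are not allowed" >&2; exit 1 ;;
                esac
            done
            exec /usr/bin/apt-get -y install "$@" ;;
        *)
            echo "usage: $0 {update|dist-upgrade|install pkg...}" >&2
            exit 1 ;;
    esac

With this, the sudoers file stays a single line (e.g. apu ALL = NOPASSWD: /usr/bin/apu-apt-wrapper) instead of trying to enumerate safe apt-get invocations.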
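
The "single button" for local repositories could boil down to a few standard commands. A sketch for a flat repository of .debs (the directory is the one used in the HOWTO below; treat the exact invocations as an assumption, not the final design):

    # build a flat repository in place and sign the Release file
    cd /var/lib/apu-data/repo/local
    apt-ftparchive packages . > Packages
    gzip -9c Packages > Packages.gz
    apt-ftparchive release . > Release
    gpg --armor --detach-sign --output Release.gpg Release

Clients would then use a flat-repository line such as "deb http://$server/local ./" in the sources.list distributed by auto-pkg-update.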

Data Preservation and Migration

The package updates should be tested on a machine before the actual network-wide deployment. In the general case, a rollback of an update is not possible, because the {pre,post}inst scripts of a package generally work only in the upgrade direction, not in the downgrade direction (when e.g. a database file is converted to a new format during an upgrade, a downgrade will result in a database file that is unreadable for the old version).
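
Using the class mechanism from the HOWTO below, a staged rollout could be as simple as this (the "testing" class is an assumption; apt-all is the prototype's command):

    # try the upgrade on a small test class first...
    sudo apt-all testing dist-upgrade
    # ...and only deploy network-wide once it looks good
    sudo apt-all default dist-upgrade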

Packages Affected

One of the apt-cacher/apt-proxy packages is likely to be used for storing the packages on the server. ssh-server is needed on the clients so that the server can connect to them. The push should happen with a command line application; we may think about writing a python-gtk front-end for it.
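
The core of such a command line push could be a simple loop over a class file; a sketch, assuming one host per line and the hypothetical wrapper from the Implementation Plan:

    # push "apt-get update" to every machine in the "default" class
    for host in $(cat /etc/apu-server/clients/default); do
        ssh "apu@$host" sudo /usr/bin/apu-apt-wrapper update
    done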

User Interface Requirements

Since we target network administrators, a command line UI should be sufficient. An optional pygtk interface may be useful. Additionally, a webmin (html) kind of UI may be useful, but that opens some security issues and should be targeted later.

Outstanding Issues

A prototype implementation was done in the michael.vogt@ubuntu.com--2005/auto-pkg-update--main--0 repository at http://people.ubuntu.com/~mvo/arch/ubuntu

From the HOWTO:

1. If you don't have a proxy solution (or a mirror/partial-mirror)
   already, install one. If you have no preference, use apt-proxy.
   The combination of squid+apache2 seems to work well too.
   Squid needs a larger maximum_object_size than the default
   (64MB should be good) and a bigger maximum cache size
   (raise cache_dir from 100 to e.g. 2000). A matching squid.conf
   fragment is shown after this list.
2. Install apu-server on the machine that will act as the server.
   This will set up ssh and gpg keys, an example sources.list and
   a 50apu apt-config file (if squid is installed) in /etc/apu-server
3. Customize your /etc/apu-server/sources.list (this will go
   to the clients) and /etc/apu-server/50apu (if you use a generic
   http proxy like squid)
4. Generate an "apu-data" package with the following command:
   "sudo gen-apu-data-pkg". The package will be placed in
   /var/lib/apu-data/repo/local
5. Install the apu-client (from the normal repository) and the newly
   generated "apu-data" Debian packages on the clients (if you have
   apache2 installed on the server you can just download them from
   http://$server/local)
6. Add each host to a file in /etc/apu-server/clients/
   Each file defines a class of machines (see the example class
   file after this list).
7. Run sudo apt-all $class $operation
   E.g.:
   * "sudo apt-all default update" -
     run apt-get update on all machines in the "default" class
   * "sudo apt-all net1 dist-upgrade" -
     run apt-get dist-upgrade on all machines in the "net1" class
   * "sudo apt-all 192.168.1.192 update" -
     update the single machine 192.168.1.192
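
For the squid option in step 1, the two settings translate into a squid.conf fragment like this (the values come straight from the step above; treat them as starting points, not tuned recommendations):

    # /etc/squid/squid.conf (fragment)
    # the default of 4 MB is too small to cache whole .deb packages
    maximum_object_size 65536 KB
    # grow the on-disk cache from the 100 MB default to ~2 GB
    cache_dir ufs /var/spool/squid 2000 16 256

On the clients, the generated 50apu file then only needs a proxy line such as Acquire::http::Proxy "http://$server:3128/"; (an assumption; port 3128 is squid's default).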
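
An example class file for step 6, assuming the simplest possible format of one host or address per line (the spec does not define the format yet):

    # /etc/apu-server/clients/net1
    192.168.1.190
    192.168.1.191
    192.168.1.192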


CategoryUdu CategorySpec