Feed aggregator

Certified Ubuntu Cloud Guest – The best of Ubuntu on the best clouds

Ubuntu | canonical.com - Thu, 06/22/2017 - 22:03

Ubuntu has a long history in the cloud. It is the number one guest operating system on AWS, Azure and Google Cloud Platform. In fact there are more Ubuntu images running in the public cloud than all other operating systems combined.

Ubuntu is a free operating system which means anyone can download an image, whenever they want. So why should cloud providers offer certified Ubuntu images to their customers?

This eBook explains why certified Ubuntu images are essential for organisations and individuals that require the highest level of security and reliability.

Download this eBook to learn:

  • How cloud providers differentiate themselves from their competitors by offering customers certified Ubuntu images
  • How to make sure your cloud provider is using certified Ubuntu images
Submit your details to download the eBook:


Kernel Team Summary: June 22, 2017

Ubuntu | canonical.com - Thu, 06/22/2017 - 02:37

This newsletter is to provide a status update from the Ubuntu Kernel Team. There will also be highlights provided for any interesting subjects the team may be working on.

If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: [email protected]

Highlights
  • FWTS 17.06.00 released: https://wiki.ubuntu.com/FirmwareTestSuite/ReleaseNotes/17.06.00
  • Released stress-ng 0.08.05, with a new Real Time cyclic stressor and a Real Time scheduling softlockup stressor.
  • Prepare 4.4.73 (Xenial)
  • Update artful/4.11 to v4.11.6
  • The embargo for CVE-2017-1000364 [1] has expired and the fix was
    released for the following packages in the updates and security pockets:
    • Trusty
      – linux 3.13.0-121.170
      – linux-lts-xenial 4.4.0-81.104~14.04.1
    • Xenial
      – linux 4.4.0-81.104
      – linux-aws 4.4.0-1020.29
      – linux-gke 4.4.0-1016.16
      – linux-raspi2 4.4.0-1059.67
      – linux-snapdragon 4.4.0-1061.66
      – linux-hwe 4.8.0-56.61~16.04.1
      – linux-hwe-edge 4.10.0-24.28~16.04.1
      – linux-joule 4.4.0-1003.8
    • Yakkety
      – linux 4.8.0-56.61
      – linux-raspi2 4.8.0-1040.44
    • Zesty
      – linux 4.10.0-24.28
      – linux-raspi2 4.10.0-1008.11

    Due to that, the proposed updates for the above packages being prepared
    on the current SRU cycle are being re-spun to include the fix.

    [1] CVE description: It was discovered that the stack guard page for
    processes in the Linux kernel was not sufficiently large enough to
    prevent overlapping with the heap. An attacker could leverage this with
    another vulnerability to execute arbitrary code and gain administrative
    privileges.

Devel Kernel Announcements

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable Kernel Announcements
Current cycle: 02-Jun through 24-Jun
  • 02-Jun Last day for kernel commits for this cycle
  • 05-Jun – 10-Jun Kernel prep week.
  • 11-Jun – 23-Jun Bug verification & Regression testing.
  • 26-Jun Release to -updates.
Next cycle: 23-Jun through 15-Jul
  • 23-Jun Last day for kernel commits for this cycle
  • 26-Jun – 01-Jul Kernel prep week.
  • 02-Jul – 14-Jul Bug verification & Regression testing.
  • 17-Jul Release to -updates.
Status: CVEs

The current CVE status can be reviewed at the following:
http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html

Drupal Architect/SME position is open

Drupal | Job Search - Tue, 06/20/2017 - 23:35
New Carrollton, MD, United States

OpenStack and Containers live Q&A session

Ubuntu | canonical.com - Tue, 06/20/2017 - 23:20

Join us for a 1 hour online session with a cloud expert

OpenStack and Containers Office Hours are online Q&A sessions held on an ongoing basis. Their aim is to help community members and customers deploy, manage and scale their Ubuntu-based cloud infrastructure.

What’s covered?

These interactive online sessions are hosted by an expert from our Cloud Team who will:

  • Outline how to leverage the latest features of Ubuntu OpenStack, LXD, MAAS, Kubernetes and Juju
  • Answer questions on OpenStack and containers technology
Who should attend?

These sessions are ideal for IT Pros, DevOps and SysAdmins wanting a relaxed, informal environment to discuss their experiences using Ubuntu Cloud technology.

Such sessions are normally attended by a small group, making them ideal for networking with other OpenStack and scale-out cloud enthusiasts.

Why join?

Get the chance to ask any questions about our software and support services.

Upcoming sessions
Book your place


MAAS Development Summary: June 12th – 16th

Ubuntu | canonical.com - Tue, 06/20/2017 - 01:30

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work-in-progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS Sprint

The Canonical MAAS team sprinted at Canonical’s London offices this week. The purpose was to review the previous development cycle & release (MAAS 2.2), as well as discuss and finalize the plans and goals for the next development release cycle (MAAS 2.3).

MAAS 2.3 (current development release)

The team has been working on the following features and improvements:

  • New Feature – support for ‘upstream’ proxy (API only) – Support for upstream proxies has landed in trunk. This iteration contains API-only support; the team continues to work on the matching UI support for this feature. See the sketch after this list for how the setting might be driven via the API.
  • Codebase transition from bzr to git – This week the team has focused efforts on updating all processes to the upcoming transition to Git. The progress so far is:
    • Prepared the MAAS CI infrastructure to fully support Git once the transition is complete.
    • Started working on new processes for auto-testing and landing PRs.
  • Django 1.11 transition – The team continues to work through the Django 1.11 transition; we’re down to 130 unittest failures!
  • Network Beaconing & better network discovery – Prototype beacons have now been sent and received! The next steps will be to work on the full protocol implementation, followed by making use of beaconing to enhance rack registration. This will provide a better out-of-the-box experience for MAAS; interfaces which share network connectivity will no longer be assumed to be on separate fabrics.
  • Started the removal of ‘tgt’ as a dependency – We have started the removal of ‘tgt’ as a dependency. This simplifies the boot process by not loading ephemeral images from tgt, but rather having the initrd download and load the ephemeral environment.
  • UI Improvements
    • Performance Improvements – Improved the loading of elements in the Device Discovery, Node listing and Events pages, which greatly improves UI performance.
    • LP #1695312 – The button to edit dynamic range says ‘Edit’ while it should say ‘Edit reserved range’
    • Remove auto-save on blur for the Fabric details summary row. Applied static content when not in edit mode.
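Since the UI work mentioned above is still in flight, the upstream proxy setting is reachable through the API/CLI only for now. Below is a minimal sketch of how such a setting could be toggled with the maas CLI; the profile name, endpoint URL and the exact option name are assumptions, so check the output of ‘maas <profile> maas get-config’ on your own installation.

# Log in to the MAAS API (profile name, URL and API key are placeholders).
maas login admin http://maas.example.com/MAAS/api/2.0/ "$MAAS_API_KEY"

# Point MAAS at an existing upstream proxy rather than serving one itself.
# NOTE: 'use_peer_proxy' is assumed to be the option backing this feature.
maas admin maas set-config name=http_proxy value=http://squid.example.com:3128/
maas admin maas set-config name=use_peer_proxy value=true

# Confirm the values took effect.
maas admin maas get-config name=use_peer_proxy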
Bug Fixes

The following issues have been fixed and backported to the MAAS 2.2 branch. They will be available in the next point release of MAAS 2.2 (2.2.1) in the coming weeks:

  • LP: #1678339 – allow physical (and bond) interfaces to be placed on VLANs with a known 802.1q tag.
  • LP: #1652298 – Improve loading of elements in the device discovery page

Distributing KeePassXC as a snap

Ubuntu | canonical.com - Mon, 06/19/2017 - 19:07

This is a guest post by Jonathan White (find him on GitHub), one of the developers behind KeePassXC. If you would like to contribute a guest post, please contact [email protected].

Can you tell us about KeePassXC?

KeePassXC, for KeePass Cross-Platform Community Edition, is an extension of the KeePassX password manager project that incorporates major feature requests and bug fixes. We are an active open source project that is available on all Linux distributions, Windows XP to 10, and Macintosh OSX. Our main goal is to incorporate the features that the community wants while balancing portability, speed, and ease of use. Some of the major features that we have already shipped are browser integration, YubiKey authentication, and a redesigned interface.

How did you find out about snaps?

I learned about snaps through an article on Ars Technica about a year ago. Since then I dove into the world of building and deploying snaps through the KeePassXC application. We deployed our first snap version of the app in January 2017.

What was the appeal of snaps that made you decide to invest in them?

The novelty of bundling an application and deploying it to the Ubuntu Store, for free, was really attractive. It also meant we could bypass the lengthy review and approval process of the official apt repository.

How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?

The initial build of the snapcraft.yaml file was a bit rough. At the time, the documentation did not provide many full-text examples of different build patterns. It only took a couple of iterations before a successful snap was built and tested locally. The easiest part was publishing the snap for public consumption; that took a matter of minutes.
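For reference, the local build-test-publish loop he describes boils down to a handful of commands. This is only a sketch of the typical 2017-era workflow; the snap file name and channel are examples, not the project’s actual release process.

# Build the snap from the snapcraft.yaml in the current directory.
snapcraft

# Install the freshly built snap locally for testing (unsigned, hence --dangerous).
sudo snap install keepassxc_*_amd64.snap --dangerous

# Publish to the store once it looks good (file name and channel are examples).
snapcraft login
snapcraft push keepassxc_2.1.4_amd64.snap --release=candidate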

With the introduction of build.snapcraft.io, the integration with our workflow has improved greatly. Now we can publish snaps immediately upon completion of a milestone, or even intermediate builds from the develop branch.

Do you currently use the snap store as a way of distributing your software? How do you see the store changing the way users find and install your software?

Yes, we use the snap store exclusively for our deployment. It is a critical tool for our distribution with over 18,000 downloads in less than 4 months! The store also ensures users have the latest version and it is always guaranteed to work on their system.

What release channels (edge/beta/candidate/stable) in the store are you using or plan to use?

We use the stable channel for milestone releases and the edge channel for intermediate builds (nightlies).
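From a user’s point of view, choosing between those channels is a single command; a sketch, assuming the package is published under the name ‘keepassxc’:

sudo snap install keepassxc                      # stable channel, milestone releases
sudo snap install keepassxc --edge               # edge channel, nightly builds
sudo snap refresh keepassxc --channel=stable     # switch an existing install back to stable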

Is there any other software you develop that might also become available as a Snap in the future?

Not at this time, but if I ever publish another cross-platform tool, I will certainly use the ubuntu store and snap builds.

How do you think packaging KeePassXC as a snap helps your users? Did you get any feedback from them?

Our users are able to discover, download, and use our app in a matter of seconds through the Ubuntu store. Packaging as a snap also removes the dependency nightmare of different Linux distributions. Snap users easily find us on Github and provide feedback on their experience. Most of the issues we have run into involve theming, plugs, and keyboard shortcuts.

How would you improve the snap system?

First I would make it easier to navigate around the developer section of the ubuntu store. It is currently a little confusing on how to get to where your current snaps are. [Note: this is work in progress, stay tuned!]

As far as snaps themselves, I wish they were built more like docker containers where different layers could be combined dynamically to provide the final product. For example, our application uses Qt5 which causes the snap size to bloat up to 70 MB. Instead, the Qt5 binaries should be provided as an independent, shared snap that gets dynamically loaded with our application’s snap. This would greatly cut down on the size and compile time of the deployment; especially if you have multiple Qt apps which all carry their own unique build. [Note: Content interfaces were built for this purpose]

Reduce the number of plugs that require manual connection. It would also be helpful if there was a GUI for the user to enable plugs for specific snaps.
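For context, connecting a plug by hand is currently a command-line operation; a sketch, where ‘removable-media’ is just an illustrative interface and what is actually available depends on the plugs the snap declares:

# Show which plugs the snap exposes and which are already connected.
snap interfaces keepassxc

# Manually connect a plug that is not auto-connected.
sudo snap connect keepassxc:removable-media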

Finally, I had the opportunity to try out the new build.snapcraft.io tool. It seems like the perfect answer to keeping up to date with building and deploying snaps to the store. The only downside I found was that it was impossible to limit the building to just the master and develop branches. This caused over 20 builds to be performed due to how active our project is (PRs, feature branches, etc). [Note: Great feedback! build.snapcraft.io is evolving and this is definitely something we’ll look into]

Senior Drupal Developer position is open

Drupal | Job Search - Mon, 06/19/2017 - 13:38
Sydney, NSW, Australia

Ubuntu Server Development Summary – 16 Jun 2017

Ubuntu | canonical.com - Sat, 06/17/2017 - 02:48

The purpose of this weekly update is to make sure our community can follow development – whether you are just dipping your toes in or jumping headlong into helping shape Ubuntu Server!

Spotlight: Task Tracking

The Canonical Server Team is using Trello to track our weekly tasks. Feel free to take a peek and follow along on the Ubuntu Server Daily board.

cloud-init and curtin
cloud-init
  • Uploaded package to Artful and supported releases proposed
  • Met with Redhat team to discuss packaging and release processes
  • Change config/cloud.cfg to act as template to allow downstream distributions to generate this for special needs
  • Added makefile target to install dependencies on various downstream distributions
  • Enable auto-generation of module docs from schema attribute if present
  • Change Redhat spec file based on init system
  • Convert templates from cheetah to jinja to allow building in python3 environments
  • Setup testing of daily cloud-init COPR builds
  • Fix LP: #1693361 race between apt-daily and cloud-init
  • Fix LP: #1686754 – stop the sysconfig renderer from leaving CIDR notation instead of a netmask
  • Fix LP: #1686751 selinux issues while running under Redhat
curtin
  • Created PPA for MAAS passthrough networking test
  • Fix LP: #1645680 adding PPA due to new GPG agent
Bug Work and Triage
  • Extended Ubuntu Server triage tool to assist with expiration of bugs in backlog
  • Review expiring ubuntu-server subscribed bugs in backlog
  • Review server-next tagged bugs for priority and relevance
  • Triage samba bugs from backlog
  • 64 bugs reviewed, 1 accepted, 317 in the backlog
  • Notes on daily bug triage
IRC Meeting
Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page.

Uploads to the Development Release (Artful)
  • billiard, 3.5.0.2-0ubuntu1, nacc
  • celery, 4.0.2-0ubuntu1, nacc
  • cloud-initramfs-tools, 0.38ubuntu1, smoser
  • curtin, 0.1.0~bzr505-0ubuntu1, smoser
  • lxcfs, 2.0.7-0ubuntu3, stgraber
  • lxd, 2.14-0ubuntu4, stgraber
  • lxd, 2.14-0ubuntu3, stgraber
  • nss, 2:3.28.4-0ubuntu2, mdeslaur
  • python-boto, 2.44.0-1ubuntu2, racb
  • python-tornado, 4.5.1-0ubuntu1, mwhudson
  • rrdtool, 1.6.0-1ubuntu1, vorlon
  • ruby2.3, 2.3.3-1ubuntu1, mdeslaur
  • samba, 2:4.5.8+dfsg-2ubuntu1, mdeslaur
Total: 13

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)
  • cloud-init, xenial, 0.7.9-153-g16a7302f-0ubuntu1~16.04.1, smoser
  • cloud-init, yakkety, 0.7.9-153-g16a7302f-0ubuntu1~16.10.1, smoser
  • cloud-init, zesty, 0.7.9-153-g16a7302f-0ubuntu1~17.04.1, smoser
  • ebtables, trusty, 2.0.10.4-3ubuntu1.14.04.1, slashd
  • ebtables, xenial, 2.0.10.4-3.4ubuntu2, slashd
  • ebtables, yakkety, 2.0.10.4-3.5ubuntu1.16.10.1, slashd
  • ebtables, zesty, 2.0.10.4-3.5ubuntu1.17.04.1, slashd
  • lxc, zesty, 2.0.8-0ubuntu1~17.04.2, stgraber
  • lxc, yakkety, 2.0.8-0ubuntu1~16.10.2, stgraber
  • lxc, xenial, 2.0.8-0ubuntu1~16.04.2, stgraber
  • lxd, zesty, 2.14-0ubuntu3~17.04.1, stgraber
  • lxd, yakkety, 2.14-0ubuntu3~16.10.1, stgraber
  • lxd, xenial, 2.14-0ubuntu3~16.04.1, stgraber
  • multipath-tools, yakkety, 0.5.0+git1.656f8865-5ubuntu7.3, cyphermox
  • vlan, trusty, 1.9-3ubuntu10.4, slashd
  • vlan, xenial, 1.9-3.2ubuntu1.16.04.3, slashd
  • vlan, yakkety, 1.9-3.2ubuntu2.16.10.2, slashd
  • vlan, zesty, 1.9-3.2ubuntu2.17.04.2, slashd
Total: 18

Contact the Ubuntu Server team

Ubuntu Desktop Weekly Update: June 16, 2017

Ubuntu | canonical.com - Sat, 06/17/2017 - 00:13

GNOME
  • Further theme fixes have been made in Artful to get GNOME Shell and Ambiance looking just right.
  • Network Manager is updated to 1.8. It is currently awaiting the resolution of some test issues before it migrates to the release, but that should take place in the coming days.
  • GNOME Terminal received a small fix to make it easier to create custom terminals. Andy Whitcroft from the kernel team blogs about it here
LivePatch

Work is continuing on the Live Patch client UI. We can now install, enable and disable the Live Patch Snap from the Software Properties window. Next up will be showing notifications when the Live Patch service is protecting your computer.

Snaps
  • GNOME Software now works with the Snap Store to show promoted Snaps, or “Editors Picks”. This is released into Artful and other supported releases will follow.
  • We debugged and fixed some desktop Snap theming issues. There were some file sharing changes needed in snapd in the “Unity7” interface (which will need renaming) and these are now merged. More fixes to the desktop launcher scripts were done to provide further default theming, and these were added to the GNOME Platform Snap as well.
  • James Henstridge has been working on getting Snaps to work with Portals, and he’s making great progress. You can read more about it, and how to test it, here:
    https://forum.snapcraft.io/t/xdg-desktop-portal-proof-of-concept-demo/1027
QA

We’re reviewing and updating the desktop test plan. Once this is finalised (due next week) we’ll be announcing a call-for-testing programme with small, quick tests you can perform regularly and feed back your findings. This will help us ensure the overall quality of the desktop images is kept high throughout the development cycle. More on this soon.

We’re also running our automated tests on real hardware with Intel, Nvidia and AMD graphics cards to cover the main bases.

Video Acceleration

We’re working through all the various links in the chain to get to a situation where we can play back video using hardware acceleration by default. At the moment our focus is on getting it to work on Intel graphics hardware; there are a few issues around using Intel’s SDK with the open-source LibVA, but these are being worked on upstream:

https://github.com/Intel-Media-SDK/MediaSDK/issues/10

In the meantime you can read the current state of play here: https://wiki.ubuntu.com/IntelQuickSyncVideo

Updates
  • Chromium 59.0.3071.86 was promoted to stable, but we found a couple of issues. They’re being worked on right now and the test plan has been updated to catch them in the future.
  • Chromium beta is 60.0.3112.24 and dev is 61.0.3124.4.
  • Network Manager 1.8 has been merged from Debian into Artful.
  • BlueZ 5.45 made it out of testing into Artful.
  • Evolution got updated to the 3.24 series.


Gitter and Mattermost: two desktop apps for your future chat platform

Ubuntu | canonical.com - Fri, 06/16/2017 - 07:17

In the hunt for the perfect communication platform or protocol, a lot of companies are experimenting, which can lead to some confusion as not everyone is moving at the same pace: one team on IRC, another one on Slack, one on “anything but Slack, have you tried Mattermost? It’s almost like RocketChat”. Then, if a platform emerges victorious, come the client wars: which version? Does the web version have fewer features than the desktop client? What about the mobile client?

This post doesn’t intend to solve the conundrum, nor advocate for one platform over the others, as its author currently has 6 notifications on Telegram, 17 highlights on IRC, 1 mention on RocketChat and 2 on Slack.

What this post proposes is an easy and painless way to experience (and experiment with) some of these platforms. Electron applications are really useful when it comes to that: they integrate neatly into the desktop experience and find their place in most workflows.

Enter snaps

As of today, if you are a Mattermost or Gitter user, you can install their respective desktop client as snaps on all supported Linux distributions (including Fedora, openSUSE, Debian…).

Why snaps, when these apps have packages available on their website and/or repository? Snaps mean you don’t have to care about updating them anymore, or look for the right binary to unpack. It also means they can be completely isolated from the parts of the filesystem you care about, and that you can switch to the beta version, or even tip of master, in a single command, then roll back to stable if the version is broken.
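For example, switching channels and rolling back is only a couple of commands; a quick sketch using gitter-desktop as the package:

sudo snap refresh gitter-desktop --channel=edge   # jump to tip of master
sudo snap refresh gitter-desktop --channel=beta   # or try the beta
sudo snap revert gitter-desktop                   # roll back to the previously installed revision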

Gitter

Website: gitter.im

Gitter is a rapidly growing platform primarily used to add a chat service to GitHub and Gitlab repositories. With over 800,000 users, Gitter has been recently acquired by Gitlab and is on its way to being open sourced.

To install Gitter as a snap, search for “gitter-desktop” in the Ubuntu Software Center, or on the command line:

sudo snap install gitter-desktop

Mattermost

Website: about.mattermost.com

Mattermost is a highly extensible, open source, self-hosted communication platform that connects to hundreds of cloud services and can be integrated with almost anything using webhooks, RESTful APIs and language-specific APIs.
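As a small illustration of the webhook integration point, posting a message to a Mattermost incoming webhook is one HTTP request; the server address and generated webhook key below are placeholders:

curl -i -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Hello from an incoming webhook!"}' \
  https://mattermost.example.com/hooks/your-generated-webhook-key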

While the server itself can be installed in ten minutes with orchestration solutions such as Juju, you can also install the desktop client in a minute, with a single command.

To install Mattermost as a snap, search for “mattermost-desktop” in the Ubuntu Software Center, or on the command line:

sudo snap install mattermost-desktop

Learning more about snaps

You can expect more desktop clients and more Electron apps in general to land in the Snap store in the next few weeks. If you want to give a go at snapping your own apps, you can find all the documentation on snapcraft.io, including your personal cross-architecture build farm.

To discuss snaps and snapcraft, you can reach out to the snap community and developers on… Discourse and IRC!

Juju 2.2.0 and conjure-up 2.2.0 are here!

Ubuntu | canonical.com - Fri, 06/16/2017 - 03:38

We are excited to announce the release of Juju 2.2.0 and conjure-up 2.2.0! This release greatly enhances memory and CPU utilisation at scale, improves the modelling of networks, and adds support for KVM containers on arm64. Additionally, there is now outline support for Oracle Compute, and vSphere clouds are now easier to deploy. conjure-up now supports Juju as a Service (JAAS), macOS clients, Oracle and vSphere clouds, and repeatable spell deployments.

How can I get it?

The best way to get your hands on this release of Juju and conjure-up is to install them via snap packages (see https://snapcraft.io/ for more info on snaps).

snap install juju --classic
snap install conjure-up --classic

Other packages are available for a variety of platforms. Please see the online documentation at https://jujucharms.com/docs/stable/reference-install. Those subscribed to a snap channel should be automatically upgraded. If you’re using the ppa/homebrew, you should see an upgrade available.

Upgrading

Changes introduced in 2.2.0 mean that you should also upgrade any controllers and hosted models after installing the new client software. Please see the documentation at https://jujucharms.com/docs/2.2/models-upgrade#upgrading-the-model-software for more information.
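A minimal sketch of that sequence, assuming a snap-installed client and a controller named ‘mycontroller’ with a hosted model named ‘default’:

sudo snap refresh juju                          # pick up the 2.2.0 client
juju upgrade-juju -m mycontroller:controller    # upgrade the controller model first
juju upgrade-juju -m mycontroller:default       # then each hosted model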

New and Improved
  • Users can now deploy workloads to CentOS 7 machines on Azure.
  • vSphere Juju users with vCenter 5.5 and vCenter 6.0 can now bootstrap successfully and deploy workloads as well as have machines organised into folders.
  • Juju now has initial support for Oracle Cloud, https://jujucharms.com/docs/2.2/help-oracle.
  • Users of Azure can now benefit from better credential management support: we’ve eliminated the need to manually discover the subscription ID in order to add an Azure credential. All you need is to have the Azure CLI installed and the regular Juju credential management commands will “Just Work” (see the sketch after this list).
  • The Juju login command now accepts the name or hostname of a public controller as a parameter. The user to log in as is now passed as an option rather than a positional parameter.
  • The behaviour of the Juju bootstrap argument ‘--metadata-source’ has changed. In addition to specifying a parent directory that contains “tools” and “images” subdirectories with metadata, this argument can now also point directly to one of these subdirectories if only one type of custom metadata is required. (lp:1696555)
  • Actions that require ‘sudo’ can now be used in conjure-up steps.
  • conjure-up now uses libjuju as its api client.
  • conjure-up can now deploy from release channels, e.g. ‘beta’.
  • There’s a new bootstrap configuration option, max-txn-log-size, that can be used to configure the size of the capped transaction log used internally by Juju. Larger deployments needed to be able to tune this setting; we don’t recommend setting this option without careful consideration.
  • General Juju log pruning policy can now be configured to specify maximum log entry age and log collection size, https://jujucharms.com/docs/2.2/controllers-config. 
  • Juju status history pruning policy can also be configured to specify maximum status entry age and status collection size, https://jujucharms.com/docs/2.2/models-config.
  • The ‘status –format=yaml’ and ‘show-machine’ commands now show more detailed information about individual machines’ network configuration.
  • Added support for AWS ‘ap-northeast-2’ region, and GCE ‘us-west1’, ‘asia-northeast1’ regions.
  • Actions have received some polish: they can now be cancelled, and showing a previously run action will include the name of the action along with the results.
  • Rotated Juju log files are now also compressed.
  • Updates to MAAS spaces and subnets can be made available to a Juju model using the new ‘reload-spaces’ command.
  • ‘unit-get private-address’ now uses the default binding for an application.
  • Juju models have always been internally identified by their owner and their short name. These full names have not been exposed well to the user but are now part of juju models and show-model command output.
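A sketch of the simplified Azure credential flow mentioned in the list above, assuming the Azure CLI (‘az’) is installed and logged in; the controller name is an example:

az login                            # authenticate the Azure CLI
juju add-credential azure           # Juju picks up the subscription via the CLI
juju bootstrap azure azure-ctrl     # bootstrap using the newly added credential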
Fixes
  • Juju more reliably determines whether to connect to the MAASv2 or MAASv1 API based on MAAS endpoint URL as well as the response received from MAAS.
  • Juju is now built with Go version 1.8 to take advantage of performance improvements.
  • Juju users will no longer be missing their firewall rules when adding a new machine on Azure.
  • Juju models with storage can now be cleanly destroyed.
  • Juju is now resilient to a MITM attack as SSH Keys of the bootstrap host are now verified before bootstrap (lp:1579593).
  • Root escalation vulnerability in ‘juju-run’ has been fixed (lp:1682411).
  • Juju’s agent presence data is now aggressively pruned, reducing controller disk space usage and avoiding associated performance issues.
  • MAAS 2.x block storage now works with physical disks, when MAAS reports the WWN unique identifier. (lp:1677001).
  • Automatic bridge names are now properly limited to 15 characters in Juju (lp:1672327).
  • Juju subordinate units are now removed as expected when their principal is removed (lp:1686696 and lp:1655486)

You can check the milestones for a detailed breakdown of the Juju and conjure-up bugs we have fixed:
https://launchpad.net/juju/+milestone/2.2.0
https://github.com/conjure-up/conjure-up/milestone/19?closed=1
Known issues

Feedback Appreciated!

We encourage everyone to let us know how you’re using Juju. Join us at regular Juju shows – subscribe to our YouTube channel https://youtube.com/jujucharms

Send us a message on Twitter using #jujucharms, join us at #juju on freenode, and subscribe to the mailing list at juju at lists.ubuntu.com.

https://jujucharms.com/docs/stable/contact-us

More information

To learn more about these great technologies please visit https://jujucharms.com and http://conjure-up.io

Custom user mappings in LXD containers

Ubuntu | canonical.com - Fri, 06/16/2017 - 01:27

Introduction

As you may know, LXD uses unprivileged containers by default.
The difference between an unprivileged container and a privileged one is whether the root user in the container is the “real” root user (uid 0 at the kernel level).

The way unprivileged containers are created is by taking a set of normal UIDs and GIDs from the host, usually at least 65536 of each (to be POSIX compliant) and mapping those into the container.

The most common example and what most LXD users will end up with by default is a map of 65536 UIDs and GIDs, with a host base id of 100000. This means that root in the container (uid 0) will be mapped to the host uid 100000 and uid 65535 in the container will be mapped to uid 165535 on the host. UID/GID 65536 and higher in the container aren’t mapped and will return an error if you attempt to use them.

From a security point of view, that means that anything which is not owned by the users and groups mapped into the container will be inaccessible. Any such resource will show up as being owned by uid/gid “-1” (rendered as 65534 or nobody/nogroup in userspace). It also means that should there be a way to escape the container, even root in the container would find itself with no more privileges on the host than the nobody user.
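You can verify the map a container ended up with straight from /proc; a quick check with the default map, where ‘c1’ is just an example container name:

# Columns: uid inside the container, host uid it maps to, size of the range.
lxc exec c1 -- cat /proc/self/uid_map
#          0     100000      65536
lxc exec c1 -- cat /proc/self/gid_map
#          0     100000      65536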

LXD does offer a number of options related to unprivileged configuration:

  • Increasing the size of the default uid/gid map
  • Setting up per-container maps
  • Punching holes into the map to expose host users and groups
Increasing the size of the default map

As mentioned above, in most cases, LXD will have a default map that’s made of 65536 uids/gids.

In most cases you won’t have to change that. There are however a few cases where you may have to:

  • You need access to uid/gid higher than 65535.
    This is most common when using network authentication inside of your containers.
  • You want to use per-container maps.
    In which case you’ll need 65536 available uid/gid per container.
  • You want to punch some holes in your container’s map and need access to host uids/gids.

The default map is usually controlled by the “shadow” set of utilities and files. On systems where that’s the case, the “/etc/subuid” and “/etc/subgid” files are used to configure those maps.

On systems that do not have a recent enough version of the “shadow” package, LXD will assume that it doesn’t have to share uid/gid ranges with anything else and will therefore assume control of a billion uids and gids, starting at the host uid/gid 100000.

But the common case is a system with a recent version of shadow.
An example of what the configuration may look like is:

$ cat /etc/subuid
lxd:100000:65536
root:100000:65536
$ cat /etc/subgid
lxd:100000:65536
root:100000:65536

The maps for “lxd” and “root” should always be kept in sync. LXD itself is restricted by the “root” allocation. The “lxd” entry is used to track what needs to be removed if LXD is uninstalled.

Now, if you want to increase the size of the map available to LXD, simply edit both of the files and bump the last value from 65536 to whatever size you need. I tend to bump it to a billion just so I don’t ever have to think about it again:

$ cat /etc/subuid
lxd:100000:1000000000
root:100000:1000000000
$ cat /etc/subgid
lxd:100000:1000000000
root:100000:100000000

After altering those files, you need to restart LXD to have it detect the new map:

# systemctl restart lxd
# cat /var/log/lxd/lxd.log
lvl=info msg="LXD 2.14 is starting in normal mode" path=/var/lib/lxd t=2017-06-14T21:21:13+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2017-06-14T21:21:13+0000
lvl=info msg="Kernel uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 0 4294967295" t=2017-06-14T21:21:13+0000
lvl=info msg="Configured LXD uid/gid map:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2017-06-14T21:21:13+0000
lvl=info msg="Connecting to a remote simplestreams server" t=2017-06-14T21:21:13+0000
lvl=info msg="Expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Done expiring log files" t=2017-06-14T21:21:13+0000
lvl=info msg="Starting /dev/lxd handler" t=2017-06-14T21:21:13+0000
lvl=info msg="LXD is socket activated" t=2017-06-14T21:21:13+0000
lvl=info msg="REST API daemon:" t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket t=2017-06-14T21:21:13+0000
lvl=info msg=" - binding TCP socket" socket=[::]:8443 t=2017-06-14T21:21:13+0000
lvl=info msg="Pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Updating images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done pruning expired images" t=2017-06-14T21:21:13+0000
lvl=info msg="Done updating images" t=2017-06-14T21:21:13+0000

As you can see, the configured map is logged at LXD startup and can be used to confirm that the reconfiguration worked as expected.

You’ll then need to restart your containers to have them start using your newly expanded map.

Per container maps

Provided that you have a sufficient amount of uid/gid allocated to LXD, you can configure your containers to use their own, non-overlapping allocation of uids and gids.

This can be useful for two reasons:

  1. You are running software which alters kernel resource ulimits.
    Those user-specific limits are tied to a kernel uid and will cross container boundaries leading to hard to debug issues where one container can perform an action but all others are then unable to do the same.
  2. You want to know that should there be a way for someone in one of your containers to somehow get access to the host that they still won’t be able to access or interact with any of the other containers.

The main downsides to using this feature are:

  • It’s somewhat wasteful with using 65536 uids and gids per container.
    That being said, you’d still be able to run over 60000 isolated containers before running out of system uids and gids.
  • It’s effectively impossible to share storage between two isolated containers as everything written by one will be seen as -1 by the other. There is ongoing work around virtual filesystems in the kernel that will eventually let us get rid of that limitation.

To have a container use its own distinct map, simply run:

$ lxc config set test security.idmap.isolated true
$ lxc restart test
$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":65536}]

The restart step is needed to have LXD remap the entire filesystem of the container to its new map.
Note that this step will take a varying amount of time depending on the number of files in the container and the speed of your storage.

As can be seen above, after restart, the container is shown to have its own map of 65536 uids/gids.

If you want LXD to allocate more than the default 65536 uids/gids to an isolated container, you can bump the size of the allocation with:

$ lxc config set test security.idmap.size 200000
$ lxc restart test
$ lxc config get test volatile.last_state.idmap
[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":200000},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":200000}]

If you’re trying to allocate more uids/gids than are left in LXD’s allocation, LXD will let you know:

$ lxc config set test security.idmap.size 2000000000
error: Not enough uid/gid available for the container.

Direct user/group mapping

The fact that all uids/gids in an unprivileged container are mapped to a normally unused range on the host means that sharing of data between host and container is effectively impossible.

Now, what if you want to share your user’s home directory with a container?

The obvious answer to that is to define a new “disk” entry in LXD which passes your home directory to the container:

$ lxc config device add test home disk source=/home/stgraber path=/home/ubuntu
Device home added to test

So that was pretty easy, but did it work?

$ lxc exec test -- bash
# ls -lh /home/
total 529K
drwx--x--x 45 nobody nogroup 84 Jun 14 20:06 ubuntu

No. The mount is clearly there, but it’s completely inaccessible to the container.
To fix that, we need to take a few extra steps:

  • Allow LXD’s use of our user uid and gid
  • Restart LXD to have it load the new map
  • Set a custom map for our container
  • Restart the container to have the new map apply
$ printf "lxd:$(id -u):1\nroot:$(id -u):1\n" | sudo tee -a /etc/subuid
lxd:201105:1
root:201105:1
$ printf "lxd:$(id -g):1\nroot:$(id -g):1\n" | sudo tee -a /etc/subgid
lxd:200512:1
root:200512:1
$ sudo systemctl restart lxd
$ printf "uid $(id -u) 1000\ngid $(id -g) 1000" | lxc config set test raw.idmap -
$ lxc restart test

At which point, things should be working in the container:

$ lxc exec test -- su ubuntu -l
$ ls -lh
total 119K
drwxr-xr-x 5 ubuntu ubuntu 8 Feb 18 2016 data
drwxr-x--- 4 ubuntu ubuntu 6 Jun 13 17:05 Desktop
drwxr-xr-x 3 ubuntu ubuntu 28 Jun 13 20:09 Downloads
drwx------ 84 ubuntu ubuntu 84 Sep 14 2016 Maildir
drwxr-xr-x 4 ubuntu ubuntu 4 May 20 15:38 snap

Conclusion

User namespaces, the kernel feature that makes those uid/gid mappings possible, is a very powerful tool that finally made containers on Linux safe by design. It is however not the easiest thing to wrap your head around, and all of that uid/gid map math can quickly become a major issue.

In LXD we’ve tried to expose just enough of those underlying features to be useful to our users while doing the actual mapping math internally. This makes things like the direct user/group mapping above significantly easier than it otherwise would be.

Going forward, we’re very interested in some of the work around uid/gid remapping at the filesystem level. This would let us decouple the on-disk user/group map from the one used for processes, making it possible to share data between differently mapped containers and to alter the various maps without needing to also remap the entire filesystem.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Discussion forum: https://discuss.linuxcontainers.org
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Web Coordinator position is open @threeriverspark

Drupal | Job Search - Thu, 06/15/2017 - 23:20
Plymouth, MN, United States

IBM & Canonical: A Virtualization and Cloud Computing (R-)Evolution

Ubuntu | canonical.com - Thu, 06/15/2017 - 23:05

As modern IT evolves, there have been many milestones along the way. Starting with bare metal servers, followed by virtualization, then cloud computing, and beyond. Each advancement has created both challenges and opportunities for IT professionals. Today the industry is focused on deploying solutions that will improve overall IT operations while reducing overhead. Orchestration and modeling solutions help these organizations to integrate, manage, and deploy flexible solutions faster and with more consistency. Canonical and IBM have partnered to help our mutual customers with advanced virtualisation solutions on the IBM z and LinuxONE platforms.

How IBM and Canonical bring virtualisation options to their customers

[https://help.ubuntu.com/lts/serverguide/virtualization.html]

The combination of Ubuntu Server with the highly virtualized platforms IBM LinuxONE and IBM z Systems offers a highly competitive set of capabilities and virtualization options that provide many of the scalability, security, and isolation needs of our customers.

The IBM LinuxONE and z Systems already come with the Processor Resource/Systems Manager (PR/SM) or Dynamic Partition Manager (DPM) built into firmware, providing creation and management of up to 85 logical partitions on the high-end models such as IBM z13 or IBM LinuxONE Emperor. Although there are no real ‘bare metal’ options with IBM LinuxONE and z Systems, there are several options for running Ubuntu Server on LinuxONE and z Systems, for example:

  • A: In the logical partition(s) which is as close to the hardware as it gets without being on bare metal
  • B: As a guest (aka virtual machine) under IBM z/VM, IBM’s commercially available hypervisor.

The newer options, open source based, are:

  • C: As a virtual machine (VM) under KVM hypervisor
  • D: As a machine container

But who provides the KVM hypervisor and a machine container foundation? Right out of the box, Ubuntu Server itself comes with:

  • A built-in KVM, the well-known Linux full-virtualization technology, with the same functions, look and feel across all architectures, while exploiting the hardware-assisted virtualisation of the s390x architecture (the SIE instruction).
  • LXD, “the container hypervisor”, a lightweight operating-system virtualization concept based on Linux Containers technology, enabling organisations to move workloads from Linux VMs straight into (machine) containers

The timeliness of the Ubuntu operating system itself ensures that our KVM is at the latest functional level; the s390x bits and pieces in particular are frequently brought upstream by IBM. This hand-in-hand delivery lets you get the most out of KVM regardless of how it’s used (see the sketch after this list):

  • using virsh only
  • using uvtool and
  • using virtinst
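To illustrate the uvtool route mentioned above, creating a KVM guest on an Ubuntu Server KVM host takes only a few commands; the release, guest name and SSH key path are examples:

sudo apt install uvtool                                    # pulls in libvirt, qemu and the uvtool helpers
uvt-simplestreams-libvirt sync release=xenial arch=s390x   # download the cloud image for this architecture
uvt-kvm create my-guest release=xenial --ssh-public-key-file ~/.ssh/id_rsa.pub
uvt-kvm ssh my-guest                                       # log in to the freshly created guest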

Ubuntu Server acting as a KVM host is more than a valid and up-to-date alternative to other KVM-based solutions on s390x – like KVM for IBM z Systems, which was withdrawn from marketing by IBM on March 7th, 2017 – see the KVM for IBM z Systems External Frequently Asked Questions.

For technical details and help on migrating from KVM for IBM z to Ubuntu KVM the Wiki page IBM KVM to Ubuntu KVM is recommended. It provides a quick technical summary of what an administrator should consider when trying to move guests from IBM-KVM to Ubuntu KVM.

LXD delivers fast, dense and secure basic container management. Containers in LXD pack workloads up to 10x more densely compared to VMs – hence LXD is a perfect way to utilize hardware more efficiently. And similar to KVM, with LXD you can run different Ubuntu releases or even other Linux distributions inside LXD machine containers.
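For instance, machine containers for different releases or even other distributions can be launched side by side in seconds; the image aliases below are the ones published on the public image servers:

lxc launch ubuntu:16.04 xenial-ct      # an Ubuntu 16.04 machine container
lxc launch ubuntu:14.04 trusty-ct      # an older Ubuntu release alongside it
lxc launch images:centos/7 centos-ct   # a different distribution from the community image server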

The significant advantage of open source based virtualization and container options, KVM and LXD, is that both are recognized by OpenStack. KVM is the default hypervisor for OpenStack and LXD can be integrated using nova-lxd driver.

A key benefit of an optimized deployment is the provisioning, orchestration and modelling provided by Juju and its Charms and Bundles, which are sets of scripts for reliably and repeatedly deploying, scaling and managing services within Juju. Even just the combination of Juju and LXD on top of Ubuntu Server can be considered a basic cloud: Juju treats Ubuntu Server with LXD enabled as a cloud, where each new instance runs inside a LXD container. Just install Ubuntu Server in an LPAR to fully benefit from the scale-up architecture of IBM LinuxONE and z Systems and the bare-metal performance of LXD containers.
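A minimal sketch of that Juju-on-LXD setup, assuming LXD has already been initialised on the Ubuntu Server LPAR; the controller and application names are examples:

juju bootstrap localhost lxd-ctrl    # the controller itself lands in a LXD container
juju deploy ubuntu my-app            # every deployed unit becomes another LXD container
juju status                          # watch the containers come up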

To conclude, there are many approaches to virtualisation and each has its own characteristics. For example:

  • LPARs are as close as possible to bare metal on IBM z Systems and LinuxONE
  • Ubuntu Server can act as KVM hypervisor and be integrated with OpenStack
  • Containers can be combined with any of the other virtualization options

There are also significant advantages:

  • Efficiency / Flexibility: LXD > KVM > LPAR
  • Isolation: LPAR > KVM > LXD

With the robustness of the IBM LinuxONE and z Systems platforms, the combination of different virtualization and management options, even inside one physical server, orchestrated by Juju, offers a broad range of options for modelling within the enterprise according to the customer’s specific needs.

Kernel Team Summary: June 15, 2017

Ubuntu | canonical.com - Thu, 06/15/2017 - 21:03

Introduction

This blog is to provide a status update from the Ubuntu Kernel Team. There will also be highlights provided for any interesting subjects the team may be working on. If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: [email protected]

Highlights
  • Unstable updated to 4.12-rc5
  • Virtualbox and zfs enabled in unstable/4.12
  • artful/4.11 updated to 4.11.4
  • Stress-ng 0.08.04 uploaded
  • Add new softlockup stressor, use with caution(!)
  • This is going to be the first of a bunch of RT stressors

The following kernels were promoted to -proposed for testing:

  • Zesty 4.10.0-23.25
  • Yakkety 4.8.0-55.58
  • Xenial 4.4.0-80.101
  • Trusty 3.13.0-120.167

 

The following kernels were promoted to -proposed for testing:

  • trusty/linux-lts-xenial 4.4.0-80.101~14.04.1
  • xenial/linux-hwe-edge 4.10.0-23.25~16.04.1
  • xenial/linux-hwe 4.8.0-55.58~16.04.1
  • xenial/linux-raspi2 4.4.0-1058.65
  • xenial/linux-snapdragon 4.4.0-1060.64
  • xenial/linux-aws 4.4.0-1019.28
  • xenial/linux-gke 4.4.0-1015.15
  • xenial/linux-joule 4.4.0-1002.7
  • yakkety/linux-raspi2 4.8.0-1039.42
  • zesty/linux-raspi2 4.10.0-1007.9

 

The following kernel snaps were uploaded to the store:

  • pc-kernel 4.4.0-79.100
  • pi2-kernel 4.4.0-1057.64
  • dragonboard-kernel 4.4.0-1059.63

 

Devel Kernel Announcements

The 4.11 kernel in artful-proposed has been updated to 4.11.4. It is also available for testing in the following PPA: https://launchpad.net/~canonical-kernel-team/+archive/ubuntu/proposed

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable Kernel Announcements
Current cycle: 02-Jun through 24-Jun
  • 02-Jun Last day for kernel commits for this cycle
  • 05-Jun – 10-Jun Kernel prep week
  • 11-Jun – 23-Jun Bug verification & Regression testing
  • 26-Jun Release to -updates.
Kernel Versions
  • precise 3.2.0-126.169
  • trusty 3.13.0-119.166
  • vivid 3.19.0-84.92
  • xenial 4.4.0-78.99
  • yakkety 4.8.0-53.56
  • linux-lts-trusty 3.13.0-117.164~precise1
  • linux-lts-vivid 3.19.0-80.88~14.04.1
  • linux-lts-xenial 4.4.0-78.99~14.04.1
Next cycle: 23-Jun through 15-Jul

  • 23-Jun Last day for kernel commits for this cycle
  • 26-Jun – 01-Jul Kernel prep week.
  • 02-Jul – 14-Jul Bug verification & Regression testing.
  • 17-Jul Release to -updates.

Status: CVEs

The current CVE status can be reviewed at the following: http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html

Team Lead / Senior Drupal Developer with 5-7yrs Exp position is open

Drupal | Job Search - Thu, 06/15/2017 - 20:30
Coimbatore, Tamilnadu, India