This material is not sponsored by, endorsed by, or affiliated with Cisco Systems, Inc. Cisco, Cisco Systems, and the Cisco Systems logo are trademarks or registered trademarks of Cisco Systems, Inc. or its affiliates. All other trademarks are trademarks of their respective owners. The list of Cisco trademarks is here

Reader assumes responsibility for use of information contained herein. The author reserves the right to make changes without notice.

Copyright 2010-2017, Gianrico Fichera


Last update: 9 Feb 2017
Previous update: March 15, 2010

 

What is IOS-XR

    IOS-XR is a Cisco operating system running on the Cisco XR12000, Cisco CRS-X, Cisco ASR and Cisco NCS series. Before release 6.0 it was based on the QNX Neutrino RTOS microkernel. The CRS-X system (introduced in 2004) adopts the Intel Xeon processor. The ASR uses the Cisco Flow processor, with 40 cores and 4 threads per core (40x4 packets processed at once). From release 6.0 onward, Cisco uses OpenEmbedded Linux.

Cisco 32-bit QNX Neutrino

1.)   IOS-XR has a microkernel-based operating system architecture.

  Operating systems can have either a monolithic kernel or a microkernel. In the first case, the operating system's processes run in the space reserved for the kernel (kernel space) while user processes run in a separate space reserved for them (user space). In a microkernel architecture, only a minimal core of the operating system runs in kernel space; all other processes run in user space. In this way only a crash of the microkernel itself could lock the system, and that risk can largely be avoided by carefully designing such a limited portion of code. All other processes (such as device drivers and protocol stacks) run in user space and can therefore be controlled individually: a crash of one process does not block the system. The disadvantage of a microkernel is the need for more memory and processor resources, because communication between separate processes is inevitably slower than within a monolithic kernel. User-space processes must make library calls to the kernel to get hardware access, since only kernel-space processes can access hardware directly. The QNX microkernel is very small (reportedly about 10 KB! You can see details here)

   Cisco IOS, by contrast, is not a microkernel OS: a single process crash can take down the whole system, all processes share the same memory space, and there is no memory isolation.
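
   A practical consequence on IOS-XR (a hedged sketch; "router" is a placeholder hostname and the process name depends on what is actually configured) is that a single process can be inspected and restarted from the CLI without reloading the box:

RP/0/RP0/CPU0:router# show processes ospf
RP/0/RP0/CPU0:router# process restart ospf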

2.)   IOS-XR is a pre-emptive multitasking operating system. This means there is a scheduler, a process running in kernel space, responsible for dividing CPU time between the processes running on the system. The scheduler uses a context-switch mechanism to save a process's state and restore it later. In cooperative multitasking, instead, it is the responsibility of each process to release resources so that the system can make room for other processes; obviously, if a process misbehaves, the whole system may hang and have to be restarted. IOS is a cooperative multitasking system (like the first 16-bit Windows operating systems). IOS-XR is a pre-emptive multitasking operating system (like a Unix system or Windows 2000, for example)

3.)  IOS-XR uses memory protection between processes. This means that a process cannot access memory that has not been allocated to it.


Cisco 64-bit OpenEmbedded Linux (from IOS-XR release 6.0)

http://www.cisco.com/assets/global/DK/seminarer/pdfs/XR60.pdf

YOCTO project   <--- used by Cisco to build its own OpenEmbedded Linux
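
For a rough idea of what an OpenEmbedded/Yocto build looks like, a generic quick-start sketch (not Cisco's actual build configuration):

git clone git://git.yoctoproject.org/poky
cd poky
source oe-init-build-env
bitbake core-image-minimal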


----------------- Virtualization ----------------
Updated: February 2017

CISCO

- Cisco calls "Open Service Containers" the feature by which containers can be launched on a router to provide
additional functionality. These containers can be either supplied by Cisco or created by the user (hence "Open"). A container can do almost anything, for example act as a file server or a DNS server, etc.;

- Cisco uses both LxC (Linux Containers) and KVM in its products. In IOS-XR, from version 6.0, I have seen that LxC is used.
Operating systems such as IOS-XE, from 3.17-16.2 (Nov. 2015), also allow the use of LxC and KVM on the ISR4000 series (from the ISR 4321 with the right DRAM and NIM-SSD storage) and on the ASR1000, as well as on the CSR1000V;  (See here for the hardware details and how to use it)

- With KVM, Cisco expects virtual machines in OVA format, which contains, among other files, the configuration file package.yaml (a quick way to peek inside an OVA is sketched below)
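
Since an OVA is simply a tar archive, its contents can be listed before deploying it; myapp.ova is a made-up placeholder name:

tar -tvf myapp.ova                  # list the contents, package.yaml should appear among them
tar -xf myapp.ova package.yaml      # extract only the descriptor (if it sits at the archive root)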

To create the machines, the hardware and the rest:  Click here


LINUX

- Containers are faster than VMware or KVM virtual machines because they use the same hardware drivers as the host machine instead of creating a hardware or storage emulation layer. They can also be copied easily from one system to another, as long as LxC is installed on both (a rough copy procedure is sketched below)
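
A minimal sketch of copying a container between two hosts that both run LxC (run as root; the container name mydeb and the host name otherhost are placeholders, and the container should be stopped first):

lxc-stop -n mydeb
tar -czf /tmp/mydeb.tar.gz -C /var/lib/lxc mydeb
scp /tmp/mydeb.tar.gz root@otherhost:/tmp/
ssh root@otherhost 'tar -xzf /tmp/mydeb.tar.gz -C /var/lib/lxc && lxc-start -n mydeb -d'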

- What is the difference between LxC and Docker? LxC can launch containers that run multiple processes (and therefore even a whole other Linux userland), while Docker containers are meant to hold a single process. So to run WordPress with Docker you create several containers, for example one with PHP, one with MySQL, one with Apache, and then make them communicate with each other. There are alternative approaches, but Docker was not born for this. In short, LxC seems more powerful than Docker, which looks more like a commercial product (a sketch of the multi-container approach follows);
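
As a sketch of the multi-container approach (a simplified two-container variant, since the official wordpress image already bundles Apache and PHP; the network name, passwords and published port are placeholders):

docker network create wpnet
docker run -d --name db --network wpnet -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=wordpress mysql:5.7
docker run -d --name wp --network wpnet -p 8080:80 -e WORDPRESS_DB_HOST=db -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_PASSWORD=secret wordpress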

- Installation on CentOS 7: (http://www.tecmint.com/install-create-run-lxc-linux-containers-on-centos/)

1)
yum install epel-release   <-- WARNING: WITHOUT THIS LINE THE NEXT ONE INSTALLS DOCKER!!!
yum install lxc
yum install lxc-templates
yum install debootstrap perl libvirt


systemctl enable lxc.service
systemctl start lxc.service
systemctl enable libvirtd
systemctl start libvirtd

systemctl status lxc.service
systemctl status libvirtd.service

2)

lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-327.36.2.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
[root@openstack ~]#

3)

The templates are here (an example of using one of them follows the listing):

ls -alh /usr/share/lxc/templates/

total 344K
drwxr-xr-x 2 root root 4.0K Feb 13 17:36 .
drwxr-xr-x 6 root root  100 Feb 13 17:36 ..
-rwxr-xr-x 1 root root  11K Dec  3 20:09 lxc-alpine
-rwxr-xr-x 1 root root  14K Dec  3 20:09 lxc-altlinux
-rwxr-xr-x 1 root root  11K Dec  3 20:09 lxc-archlinux
-rwxr-xr-x 1 root root 9.5K Dec  3 20:09 lxc-busybox
-rwxr-xr-x 1 root root  29K Dec  3 20:09 lxc-centos
-rwxr-xr-x 1 root root  11K Dec  3 20:09 lxc-cirros
-rwxr-xr-x 1 root root  18K Dec  3 20:09 lxc-debian
-rwxr-xr-x 1 root root  18K Dec  3 20:09 lxc-download
-rwxr-xr-x 1 root root  49K Dec  3 20:09 lxc-fedora
-rwxr-xr-x 1 root root  28K Dec  3 20:09 lxc-gentoo
-rwxr-xr-x 1 root root  14K Dec  3 20:09 lxc-openmandriva
-rwxr-xr-x 1 root root  14K Dec  3 20:09 lxc-opensuse
-rwxr-xr-x 1 root root  35K Dec  3 20:09 lxc-oracle
-rwxr-xr-x 1 root root  12K Dec  3 20:09 lxc-plamo
-rwxr-xr-x 1 root root 6.7K Dec  3 20:09 lxc-sshd
-rwxr-xr-x 1 root root  24K Dec  3 20:09 lxc-ubuntu
-rwxr-xr-x 1 root root  12K Dec  3 20:09 lxc-ubuntu-cloud
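
For example, template-specific options go after a double dash, and the download template can fetch prebuilt images (the container name mycentos is a placeholder):

lxc-create -n mycentos -t download -- --dist centos --release 7 --arch amd64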

4)

EXTEND THE SWAP PARTITION

Go here:

https://help.ubuntu.com/12.04/serverguide/lxc.html#lxc-hostsetup
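
If resizing the swap partition itself is not practical, adding swap through a swap file is a common alternative; a minimal sketch as root (2G is an arbitrary size):

fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab    # make it persistent across reboots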

Using a separate filesystem for the container store

LXC stores container information and (with the default backing store) root filesystems under /var/lib/lxc. Container creation templates also tend to store cached distribution information under /var/cache/lxc.

If you wish to use another filesystem than /var, you can mount a filesystem which has more space into those locations. If you have a disk dedicated for this, you can simply mount it at /var/lib/lxc. If you'd like to use another location, like /srv, you can bind mount it or use a symbolic link. For instance, if /srv is a large mounted filesystem, create and symlink two directories:


sudo mkdir /srv/lxclib /srv/lxccache
sudo rm -rf /var/lib/lxc /var/cache/lxc
sudo ln -s /srv/lxclib /var/lib/lxc
sudo ln -s /srv/lxccache /var/cache/lxc

or, using bind mounts:


sudo mkdir /srv/lxclib /srv/lxccache
sudo sed -i '$a \
/srv/lxclib /var/lib/lxc    none defaults,bind 0 0 \
/srv/lxccache /var/cache/lxc none defaults,bind 0 0' /etc/fstab
sudo mount -a




Before creating a container, some preliminary steps may be necessary. For example, if you want to install an Ubuntu container on Debian you have to create a symbolic link in /usr/bin. See here for the details

Creating a Debian container is easy:

lxc-create -n container_name -t container_template
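
For example, for a hypothetical container named mydeb built from the debian template:

lxc-create -n mydeb -t debian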


Start the container in the background (and stop it when done):

 lxc-start -n mydeb -d
 lxc-stop -n mydeb

Other commands:
lxc-ls to list your containers
lxc-ls --active to list only the running ones
lxc-info to obtain detailed information about a running/stopped container
To log in to the container console, issue the lxc-console command against a running container name
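
For example, against the mydeb container started above (Ctrl+a q detaches from the console):

lxc-info -n mydeb
lxc-console -n mydeb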

And finally, all created containers reside in /var/lib/lxc/




CISCO, other notes:


Create the container archive on a Linux server.
• Copy the archive file to /misc/app_host.
• Unarchive it into a rootfs directory.
• Create an XML file specifying the LXC parameters.
• Run the virsh command.

virsh -c lxc+tcp://10.11.12.15:16509 create
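
A hedged sketch of these steps as shell commands (the archive name thirdparty-rootfs.tar.gz, the container name mylxc, the memory size and the init path are hypothetical placeholders; the IP and port are taken from the virsh line above):

# unpack the archive that was copied to /misc/app_host
mkdir -p /misc/app_host/mylxc/rootfs
tar -xzf /misc/app_host/thirdparty-rootfs.tar.gz -C /misc/app_host/mylxc/rootfs

# write a minimal libvirt LXC domain definition
cat > /misc/app_host/mylxc/mylxc.xml <<'EOF'
<domain type='lxc'>
  <name>mylxc</name>
  <memory unit='MiB'>512</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <filesystem type='mount'>
      <source dir='/misc/app_host/mylxc/rootfs'/>
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>
EOF

# start the container through libvirt's LXC driver
virsh -c lxc+tcp://10.11.12.15:16509 create /misc/app_host/mylxc/mylxc.xml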


Cisco: A KVM can be slightly more portable than an LxC container, while an LxC might have a slight performance edge over a KVM.

https://www.cisco.com/c/dam/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/q-and-a-c67-737653.pdf




----------------------

----------------------