Linux Made Simple

English | 2015 | 146 Pages | PDF | 50 MB

Get started with Linux from the ground up: we’ve got everything you need to know in Linux Made Simple.
New to Linux? Here’s everything you need to get started.


The kernel

The nerve centre at the heart of your Linux operating system.

The kernel is the beating heart of the system, but what is it? The kernel is the software interface to the computer's hardware. It communicates with the CPU, memory and other devices on behalf of any software running on the computer. As such, it is the lowest-level component in the software stack, and the most important. If the kernel has a problem, every piece of software running on the computer shares in that problem.
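You can ask the kernel to identify itself on any Linux system; a quick check, assuming a typical install:

```shell
# Print the release of the kernel currently mediating between software and hardware
uname -r
# The kernel also reports its own build details through the /proc interface
cat /proc/version
```

Everything under /proc is provided by the kernel itself, which makes it a handy window into what the kernel is doing.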

The Linux kernel is a monolithic kernel – all the main OS services run in the kernel. The alternative is a microkernel, where most of the work is done by external processes, with the kernel doing little more than co-ordinating.

While a pure monolithic kernel worked well in the early days, when users compiled a kernel for their hardware, there are so many combinations of hardware nowadays that building them all into the kernel would result in a huge file. So the Linux kernel is now modular: the core functions are in the kernel file (you can see this in /boot as vmlinuz-version), while the optional drivers are built as separate modules in /lib/modules (the .ko files in this directory).
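You can see this split on your own machine; exact filenames vary by distro, but the layout is the same:

```shell
# The core kernel image lives in /boot (the exact name varies by distro)
ls /boot/vmlinuz-* 2>/dev/null || true
# Optional drivers are separate .ko files under /lib/modules, one tree per kernel
ls /lib/modules/ 2>/dev/null || true
```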

For example, Ubuntu 14.04's 64-bit kernel is 5MB in size, while there are a further 3,700 modules occupying over 100MB. Only a fraction of these are needed on any particular machine, so it would be insane to try to load them all with the main kernel. Instead, the kernel detects the hardware in use and loads the relevant modules, which become part of the kernel in memory, so it is still monolithic when loaded even when spread across thousands of files. This enables a system to react to changes in hardware. Plug in a USB memory stick and the usb-storage module is loaded, along with the filesystem module needed to mount it. In a similar way, connect a 3G dongle and the serial modem drivers are loaded. This is why it is rarely necessary to install new drivers when adding hardware; they're all there, just waiting for you to buy some new toys to plug in. Computers that run on specific and unchanging hardware, such as servers, usually have a kernel with all the required drivers compiled in and module loading disabled, which adds a small amount of security.

If you are compiling your own kernel, a good rule of thumb is to build in drivers for hardware that is always in use, such as your network interface and hard disk filesystems, and build modules for everything else.
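In the kernel's .config file, that rule of thumb is the difference between =y (built in) and =m (built as a module). A hypothetical fragment, with the option names chosen for illustration:

```shell
# Excerpt from a kernel .config (illustrative choices, not a complete file)
CONFIG_EXT4_FS=y        # root filesystem driver: always in use, so build it in
CONFIG_E1000E=y         # this machine's network interface: build it in
CONFIG_USB_STORAGE=m    # USB sticks come and go: build as a module
CONFIG_BT=m             # Bluetooth: only loaded when actually used
```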

Even more modules

The huge number of modules, most of which are hardware drivers, has become one of the strengths of Linux in recent years – so much hardware is supported by default, with no need to download and install drivers from anywhere else. There is still some hardware not covered by in-kernel modules, usually because the code is too new or its licence prevents it being included with the kernel (yes, ZFS, we're looking at you). The drivers for Nvidia cards are the best-known examples. Usually known as third-party modules, although Ubuntu also calls them 'restricted drivers', these are installed from your package manager if your distro supports them. Otherwise, they have to be compiled from source, which has to be done again each time you update your kernel, because they are tied to the kernel for which they were built.

There have been some efforts to provide a level of automation to this, notably DKMS (Dynamic Kernel Module Support), which automatically recompiles all third-party modules when a new kernel is installed, making the process of upgrading a kernel almost as seamless as upgrading user applications.
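If your distro uses DKMS, you can see what it is looking after; the module name and version in the install example are hypothetical:

```shell
# If DKMS is installed, it lists the out-of-tree modules it is tracking
# and which kernels each has been built for
if command -v dkms >/dev/null; then
    dkms status
fi
# After a kernel upgrade, DKMS rebuilds each module automatically; the same
# can be done by hand, e.g. (hypothetical module name and version):
#   sudo dkms install nvidia/340.108
```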

Phrases that you will see bandied about when referring to kernels are “kernel space” and “user space”. Kernel space is memory that can only be accessed by the kernel; no user programs (which means anything but the kernel and its modules) can write here, so a wayward program cannot corrupt the kernel’s operations. User space, on the other hand, can be accessed by any program with the appropriate privileges. This contributes towards the stability and security of Linux, because no program, even running as root, can directly undermine the kernel.
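The boundary between the two is crossed through system calls, and you can watch those crossings happen; a sketch, assuming strace is available:

```shell
# Every file or hardware access from user space crosses into kernel space
# through a system call; strace (if installed) makes those crossings visible
if command -v strace >/dev/null; then
    strace -e trace=openat ls >/dev/null || true
fi
```

Each line strace prints is one request from a user-space program that the kernel services on its behalf.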


The X Window System

How your Linux box stays looking so tickety-boo.

The X Window System is the standard basis for providing a graphical interface. While the likes of KDE and Gnome provide the user interface and eye candy, it is through X that they communicate with the hardware. For many years, this was a mass of arcane configuration options and required a lengthy configuration file containing things such as modelines that specified the likes of pixel clock rates and sync frequencies.

These days, most systems will run without any configuration file at all. Massive improvements in hardware detection mean that a system with a single video card and single display will ‘just work’. You may need to install extra drivers to get 3D acceleration if you are using, for example, an Nvidia card, but otherwise you just boot the computer and start clicking your mouse. X has a client/server architecture. X itself runs as the server, maintaining the display; client programs then communicate with the server, telling it what to draw and where.
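The client/server split shows up in the DISPLAY environment variable, which tells each client which server to draw on; a small sketch, assuming you are sitting at an X session:

```shell
# Each X client finds its server through the DISPLAY variable
# (":0" means the first display on the local machine)
echo "DISPLAY is set to: $DISPLAY"
# Pointing a client at another server is just a matter of changing it:
#   DISPLAY=:1 xterm    # opens xterm on a second local X server, if one is running
```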


Networking

How your computer talks to others.

Networking is core to Linux. Even on a standalone machine with no connection to a local network, let alone the internet, networking is still used. Many services run on a client/server model, where the server runs in the background waiting for instructions from other programs. Even something as basic as the system logger runs as a networked service, allowing other programs to write to log files. The X graphics system is also networked, with the X server running the desktop and programs telling it what they want displayed. This is why it is so simple to run X programs on a remote desktop – as far as the system is concerned, there is no major difference between that and opening a window locally.
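SSH's X forwarding makes use of exactly this property; the hostname below is hypothetical:

```shell
# Because X is networked, a program can run on one machine and display on
# another; with SSH's X forwarding (user and hostname here are hypothetical):
#   ssh -X user@remotebox xclock
# xclock runs on remotebox, but its window is drawn by the local X server.
```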

Running ifconfig will always show at least one interface, called lo, with an address of 127.0.0.1 – this is used by the computer for talking to itself, which is regarded as a more sane activity for computers than for people. Most other networking is based on TCP/IP, either over wired Ethernet or wireless, but there are other types of network in use. All distros and desktops include good tools for configuring and maintaining TCP/IP networks, from the fairly ubiquitous NetworkManager to individual tools such as Gnome's network configuration tool or OpenSUSE's YaST. More recent additions to the networking scene include 3G mobile communications and PAN (Personal Area Network) technologies such as Bluetooth. Using a 3G mobile broadband dongle is usually simple, either using NetworkManager or your desktop's PPP software. Yes, 3G modems really do work like modems, using dial scripts and everything, but without the cacophony of squawks as you connect (younger readers should ignore that last statement). Most problems with 3G are caused by trying to set them up in a poor signal area rather than with either the hardware or software support in Linux.
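You can confirm the loopback interface is there on any machine; on modern systems the ip command has largely replaced ifconfig:

```shell
# The loopback interface exists even with no network hardware attached
if command -v ip >/dev/null; then
    ip addr show lo        # 'ip' is the modern replacement for ifconfig
fi
# The machine talking to itself:
#   ping -c 1 127.0.0.1
```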