[NSE1] Operating System Kernel Design 1

2018

Home Assignment

For this assignment, you’ll have to submit a report of at least 10 pages (aim for 20, but don’t pad it if there is no need to), with the following constraints:

  • at most 2 people per subject
  • members of a group need to be in the same speciality (all SRS, or all GISTRE)
  • you will need to submit your work in PDF format to gabriel+nse2019@lse.epita.fr with the tag “[NSE1]” and at least your login in the subject line.
  • Remember, every question needs to be answered (don’t hesitate to synchronize with the other teams beforehand)

Subjects

As a general rule, for all these subjects, you need to establish:

  • history and context
  • state of the art

You will also need to cite all your sources and references.

LSM and SELinux

How does it work, and what is the historical context? Also, present the kernel and userland APIs. Other questions:

  • How to write a custom LSM? (see the sketch after this list)
  • SELinux: how is a policy specified, and what is the complete data and control path?
  • SELinux: how can we audit changes in a policy?
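
For the custom-LSM question, here is a minimal sketch of a built-in “minor” LSM in the style of Yama or LoadPin, assuming a ~4.x kernel. The name and the hook are purely illustrative, and the registration interface has changed across kernel versions (security_initcall() was later replaced by DEFINE_LSM()), so check the kernel you target.

    #include <linux/lsm_hooks.h>
    #include <linux/binfmts.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    /* Called on every execve(); returning -EPERM here would veto the exec.
     * Real policy decisions (label lookups, checks) belong in hooks like this. */
    static int demo_bprm_check(struct linux_binprm *bprm)
    {
            return 0;
    }

    /* "demo" is an illustrative name; one entry per hook we implement. */
    static struct security_hook_list demo_hooks[] __lsm_ro_after_init = {
            LSM_HOOK_INIT(bprm_check_security, demo_bprm_check),
    };

    static int __init demo_lsm_init(void)
    {
            pr_info("demo LSM: registering hooks\n");
            security_add_hooks(demo_hooks, ARRAY_SIZE(demo_hooks), "demo");
            return 0;
    }

    security_initcall(demo_lsm_init);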

Netfilter and IPsec

Historical context and evolutions. Explain the complete control/data path.
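
To explore the data path concretely, it can help to drop a probe into it yourself. Below is a sketch of a minimal netfilter hook module sitting at PRE_ROUTING; it assumes a reasonably recent kernel where hooks are registered per network namespace with nf_register_net_hook(), and the module is purely illustrative.

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <net/net_namespace.h>

    /* Runs on every IPv4 packet before routing; a real module would look
     * at skb here. Returning NF_DROP would silently discard the packet. */
    static unsigned int demo_hook(void *priv, struct sk_buff *skb,
                                  const struct nf_hook_state *state)
    {
            return NF_ACCEPT;
    }

    static struct nf_hook_ops demo_ops = {
            .hook     = demo_hook,
            .pf       = NFPROTO_IPV4,
            .hooknum  = NF_INET_PRE_ROUTING,
            .priority = NF_IP_PRI_FIRST,
    };

    static int __init demo_init(void)
    {
            return nf_register_net_hook(&init_net, &demo_ops);
    }

    static void __exit demo_exit(void)
    {
            nf_unregister_net_hook(&init_net, &demo_ops);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");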

eBPF

How does eBPF work? How can we do userland tracing with it? What changes have made the framework so extensible? You may have to write/explain a proof of concept showing that this actually works.
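
For the userland-tracing part, the shortest path is a uprobe. The sketch below is written in BCC’s restricted C and logs the size passed to every malloc() call in libc; the probe target is only an example. It is loaded and attached from userland, e.g. with a few lines of BCC Python: BPF(text=prog).attach_uprobe(name="c", sym="malloc", fn_name="trace_malloc").

    #include <uapi/linux/ptrace.h>

    /* Fires on each call to the probed function (here: libc malloc, chosen
     * only as an example). PT_REGS_PARM1 is the first argument register. */
    int trace_malloc(struct pt_regs *ctx)
    {
            u64 size = PT_REGS_PARM1(ctx);

            bpf_trace_printk("malloc(%llu)\n", size);
            return 0;
    }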

Network stack in userland

Why do we need this? Explain the differences between the various technologies used (DPDK, PF_RING, AF_PACKET…)

Build a relevant use case for these technologies.
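
As a baseline for that comparison, the sketch below shows the stock AF_PACKET path with no acceleration at all (no PACKET_MMAP ring, no zero copy): one syscall and one copy per frame, which is exactly what PF_RING and DPDK are designed to avoid. It needs CAP_NET_RAW to run.

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>

    int main(void)
    {
            /* Raw packet socket capturing every protocol on every interface. */
            int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            unsigned char frame[2048];
            ssize_t n = recv(fd, frame, sizeof(frame), 0);
            if (n < 0)
                    perror("recv");
            else
                    printf("received a %zd-byte frame\n", n);

            close(fd);
            return 0;
    }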

NVMe device passthrough

NVMe is the new SCSI. Why is it here, and how does it work? How can we do NVMe passthrough without SR-IOV?
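
To get a feel for what an NVMe command actually looks like, note that the Linux driver already exposes a pass-through path from userland: raw admin and I/O commands can be submitted through the NVME_IOCTL_ADMIN_CMD and NVME_IOCTL_IO_CMD ioctls. The sketch below sends an Identify Controller command to /dev/nvme0 (device path assumed; needs root); it is a useful reference point before discussing how to hand a controller or a namespace to a guest without SR-IOV.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    int main(void)
    {
            unsigned char id[4096];

            /* Identify Controller: admin opcode 0x06, CNS=1 in CDW10. */
            struct nvme_admin_cmd cmd = {
                    .opcode   = 0x06,
                    .addr     = (uint64_t)(uintptr_t)id,
                    .data_len = sizeof(id),
                    .cdw10    = 1,
            };

            /* Controller character device; the path is an assumption. */
            int fd = open("/dev/nvme0", O_RDONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
                    perror("NVME_IOCTL_ADMIN_CMD");
                    close(fd);
                    return 1;
            }

            /* The model number is 40 ASCII bytes at offset 24 of the data. */
            printf("model: %.40s\n", id + 24);

            close(fd);
            return 0;
    }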

iSCSI/SCSI hooking

This is a case study. We have machines that we netboot into Linux with PXE. All of them are diskless. We need to be able to do the same thing with Windows.

Windows is able to boot over PXE and use an iSCSI disk as its rootfs. But the image is too big for us to have one volume per machine.

The goal here is to have only one base volume that we copy-on-write on each new connection. How can we integrate this into the current Linux iSCSI stack? We also need to be able to do some kind of garbage collection of the least recently used volumes to save space.

2017

  • x86 Registers & System Operations
    • General Registers
    • Control Registers
    • MSR
    • GDT & segment selectors
    • IDT
    • Paging
  • Kernel Taxonomy
    • Monolithic
    • Micro-kernels
    • Hybrids
    • Unikernels
  • Data Structures in the Linux kernel
  • Processes
    • task state
    • memory organisation
    • discovery of the struct task_struct
    • process tree
  • Metrics for scheduling
  • Scheduling algorithms

[NSE2] Operating System Kernel Design 2