Merge tag 'affs-for-4.18-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave...
author Linus Torvalds <torvalds@linux-foundation.org>
Mon, 4 Jun 2018 21:27:09 +0000 (14:27 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Mon, 4 Jun 2018 21:27:09 +0000 (14:27 -0700)
Pull affs fix from David Sterba:
 "A potential memory leak fix for AFFS"

* tag 'affs-for-4.18-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  affs: fix potential memory leak when parsing option 'prefix'

1187 files changed:
Documentation/00-INDEX
Documentation/ABI/stable/sysfs-devices-node
Documentation/ABI/testing/sysfs-kernel-mm-hugepages
Documentation/ABI/testing/sysfs-kernel-mm-ksm
Documentation/ABI/testing/sysfs-kernel-slab
Documentation/admin-guide/bcache.rst [new file with mode: 0644]
Documentation/admin-guide/cgroup-v2.rst [new file with mode: 0644]
Documentation/admin-guide/index.rst
Documentation/admin-guide/kernel-parameters.txt
Documentation/admin-guide/mm/concepts.rst [new file with mode: 0644]
Documentation/admin-guide/mm/hugetlbpage.rst [new file with mode: 0644]
Documentation/admin-guide/mm/idle_page_tracking.rst [new file with mode: 0644]
Documentation/admin-guide/mm/index.rst [new file with mode: 0644]
Documentation/admin-guide/mm/ksm.rst [new file with mode: 0644]
Documentation/admin-guide/mm/numa_memory_policy.rst [new file with mode: 0644]
Documentation/admin-guide/mm/pagemap.rst [new file with mode: 0644]
Documentation/admin-guide/mm/soft-dirty.rst [new file with mode: 0644]
Documentation/admin-guide/mm/transhuge.rst [new file with mode: 0644]
Documentation/admin-guide/mm/userfaultfd.rst [new file with mode: 0644]
Documentation/admin-guide/ramoops.rst
Documentation/arm/Marvell/README
Documentation/bcache.txt [deleted file]
Documentation/block/cmdline-partition.txt
Documentation/block/null_blk.txt
Documentation/cachetlb.txt [deleted file]
Documentation/cgroup-v2.txt [deleted file]
Documentation/circular-buffers.txt [deleted file]
Documentation/clk.txt [deleted file]
Documentation/core-api/cachetlb.rst [new file with mode: 0644]
Documentation/core-api/circular-buffers.rst [new file with mode: 0644]
Documentation/core-api/gfp_mask-from-fs-io.rst [new file with mode: 0644]
Documentation/core-api/index.rst
Documentation/core-api/kernel-api.rst
Documentation/core-api/refcount-vs-atomic.rst
Documentation/crypto/index.rst
Documentation/dev-tools/kasan.rst
Documentation/dev-tools/kselftest.rst
Documentation/devicetree/bindings/hwmon/gpio-fan.txt
Documentation/devicetree/bindings/hwmon/ltc2990.txt [new file with mode: 0644]
Documentation/devicetree/bindings/net/dsa/b53.txt
Documentation/driver-api/clk.rst [new file with mode: 0644]
Documentation/driver-api/device_connection.rst
Documentation/driver-api/gpio/driver.rst
Documentation/driver-api/index.rst
Documentation/driver-api/uio-howto.rst
Documentation/features/core/BPF-JIT/arch-support.txt [deleted file]
Documentation/features/core/cBPF-JIT/arch-support.txt [new file with mode: 0644]
Documentation/features/core/eBPF-JIT/arch-support.txt [new file with mode: 0644]
Documentation/features/core/generic-idle-thread/arch-support.txt
Documentation/features/core/jump-labels/arch-support.txt
Documentation/features/core/tracehook/arch-support.txt
Documentation/features/debug/KASAN/arch-support.txt
Documentation/features/debug/gcov-profile-all/arch-support.txt
Documentation/features/debug/kgdb/arch-support.txt
Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
Documentation/features/debug/kprobes/arch-support.txt
Documentation/features/debug/kretprobes/arch-support.txt
Documentation/features/debug/optprobes/arch-support.txt
Documentation/features/debug/stackprotector/arch-support.txt
Documentation/features/debug/uprobes/arch-support.txt
Documentation/features/debug/user-ret-profiler/arch-support.txt
Documentation/features/io/dma-api-debug/arch-support.txt [deleted file]
Documentation/features/io/dma-contiguous/arch-support.txt
Documentation/features/io/sg-chain/arch-support.txt
Documentation/features/lib/strncasecmp/arch-support.txt [deleted file]
Documentation/features/locking/cmpxchg-local/arch-support.txt
Documentation/features/locking/lockdep/arch-support.txt
Documentation/features/locking/queued-rwlocks/arch-support.txt
Documentation/features/locking/queued-spinlocks/arch-support.txt
Documentation/features/locking/rwsem-optimized/arch-support.txt
Documentation/features/perf/kprobes-event/arch-support.txt
Documentation/features/perf/perf-regs/arch-support.txt
Documentation/features/perf/perf-stackdump/arch-support.txt
Documentation/features/sched/membarrier-sync-core/arch-support.txt
Documentation/features/sched/numa-balancing/arch-support.txt
Documentation/features/scripts/features-refresh.sh [new file with mode: 0755]
Documentation/features/seccomp/seccomp-filter/arch-support.txt
Documentation/features/time/arch-tick-broadcast/arch-support.txt
Documentation/features/time/clockevents/arch-support.txt
Documentation/features/time/context-tracking/arch-support.txt
Documentation/features/time/irq-time-acct/arch-support.txt
Documentation/features/time/modern-timekeeping/arch-support.txt
Documentation/features/time/virt-cpuacct/arch-support.txt
Documentation/features/vm/ELF-ASLR/arch-support.txt
Documentation/features/vm/PG_uncached/arch-support.txt
Documentation/features/vm/THP/arch-support.txt
Documentation/features/vm/TLB/arch-support.txt
Documentation/features/vm/huge-vmap/arch-support.txt
Documentation/features/vm/ioremap_prot/arch-support.txt
Documentation/features/vm/numa-memblock/arch-support.txt
Documentation/features/vm/pte_special/arch-support.txt
Documentation/filesystems/Locking
Documentation/filesystems/proc.txt
Documentation/filesystems/tmpfs.txt
Documentation/filesystems/vfs.txt
Documentation/hwmon/hwmon-kernel-api.txt
Documentation/hwmon/ltc2990
Documentation/i2c/busses/i2c-ocores
Documentation/index.rst
Documentation/ioctl/botching-up-ioctls.txt
Documentation/memory-barriers.txt
Documentation/networking/ppp_generic.txt
Documentation/process/2.Process.rst
Documentation/process/5.Posting.rst
Documentation/process/index.rst
Documentation/process/maintainer-pgp-guide.rst
Documentation/process/submitting-patches.rst
Documentation/scsi/scsi_eh.txt
Documentation/security/index.rst
Documentation/sound/alsa-configuration.rst
Documentation/sound/soc/codec.rst
Documentation/sound/soc/platform.rst
Documentation/sysctl/vm.txt
Documentation/trace/coresight.txt
Documentation/trace/ftrace-uses.rst
Documentation/trace/ftrace.rst
Documentation/translations/ko_KR/memory-barriers.txt
Documentation/vfio.txt
Documentation/vm/00-INDEX
Documentation/vm/active_mm.rst [new file with mode: 0644]
Documentation/vm/active_mm.txt [deleted file]
Documentation/vm/balance [deleted file]
Documentation/vm/balance.rst [new file with mode: 0644]
Documentation/vm/cleancache.rst [new file with mode: 0644]
Documentation/vm/cleancache.txt [deleted file]
Documentation/vm/conf.py [new file with mode: 0644]
Documentation/vm/frontswap.rst [new file with mode: 0644]
Documentation/vm/frontswap.txt [deleted file]
Documentation/vm/highmem.rst [new file with mode: 0644]
Documentation/vm/highmem.txt [deleted file]
Documentation/vm/hmm.rst [new file with mode: 0644]
Documentation/vm/hmm.txt [deleted file]
Documentation/vm/hugetlbfs_reserv.rst [new file with mode: 0644]
Documentation/vm/hugetlbfs_reserv.txt [deleted file]
Documentation/vm/hugetlbpage.txt [deleted file]
Documentation/vm/hwpoison.rst [new file with mode: 0644]
Documentation/vm/hwpoison.txt [deleted file]
Documentation/vm/idle_page_tracking.txt [deleted file]
Documentation/vm/index.rst [new file with mode: 0644]
Documentation/vm/ksm.rst [new file with mode: 0644]
Documentation/vm/ksm.txt [deleted file]
Documentation/vm/mmu_notifier.rst [new file with mode: 0644]
Documentation/vm/mmu_notifier.txt [deleted file]
Documentation/vm/numa [deleted file]
Documentation/vm/numa.rst [new file with mode: 0644]
Documentation/vm/numa_memory_policy.txt [deleted file]
Documentation/vm/overcommit-accounting [deleted file]
Documentation/vm/overcommit-accounting.rst [new file with mode: 0644]
Documentation/vm/page_frags [deleted file]
Documentation/vm/page_frags.rst [new file with mode: 0644]
Documentation/vm/page_migration [deleted file]
Documentation/vm/page_migration.rst [new file with mode: 0644]
Documentation/vm/page_owner.rst [new file with mode: 0644]
Documentation/vm/page_owner.txt [deleted file]
Documentation/vm/pagemap.txt [deleted file]
Documentation/vm/remap_file_pages.rst [new file with mode: 0644]
Documentation/vm/remap_file_pages.txt [deleted file]
Documentation/vm/slub.rst [new file with mode: 0644]
Documentation/vm/slub.txt [deleted file]
Documentation/vm/soft-dirty.txt [deleted file]
Documentation/vm/split_page_table_lock [deleted file]
Documentation/vm/split_page_table_lock.rst [new file with mode: 0644]
Documentation/vm/swap_numa.rst [new file with mode: 0644]
Documentation/vm/swap_numa.txt [deleted file]
Documentation/vm/transhuge.rst [new file with mode: 0644]
Documentation/vm/transhuge.txt [deleted file]
Documentation/vm/unevictable-lru.rst [new file with mode: 0644]
Documentation/vm/unevictable-lru.txt [deleted file]
Documentation/vm/userfaultfd.txt [deleted file]
Documentation/vm/z3fold.rst [new file with mode: 0644]
Documentation/vm/z3fold.txt [deleted file]
Documentation/vm/zsmalloc.rst [new file with mode: 0644]
Documentation/vm/zsmalloc.txt [deleted file]
Documentation/vm/zswap.rst [new file with mode: 0644]
Documentation/vm/zswap.txt [deleted file]
Documentation/x86/x86_64/boot-options.txt
LICENSES/exceptions/Linux-syscall-note
LICENSES/other/Apache-2.0 [new file with mode: 0644]
LICENSES/other/CC-BY-SA-4.0 [new file with mode: 0644]
LICENSES/other/CDDL-1.0 [new file with mode: 0644]
LICENSES/other/Linux-OpenIB [new file with mode: 0644]
LICENSES/other/X11 [new file with mode: 0644]
LICENSES/preferred/GPL-2.0
MAINTAINERS
Makefile
arch/Kconfig
arch/alpha/Kconfig
arch/alpha/include/asm/pci.h
arch/arc/Kconfig
arch/arc/include/asm/Kbuild
arch/arc/include/asm/dma-mapping.h [deleted file]
arch/arc/include/asm/pci.h
arch/arc/mm/dma.c
arch/arm/Kconfig
arch/arm/boot/dts/sun4i-a10.dtsi
arch/arm/boot/dts/sun8i-h3-orangepi-one.dts
arch/arm/boot/dts/sun8i-v3s-licheepi-zero-dock.dts
arch/arm/include/asm/pci.h
arch/arm/kernel/dma.c
arch/arm/kernel/setup.c
arch/arm/kernel/swp_emulate.c
arch/arm/mach-axxia/Kconfig
arch/arm/mach-bcm/Kconfig
arch/arm/mach-ep93xx/core.c
arch/arm/mach-exynos/Kconfig
arch/arm/mach-highbank/Kconfig
arch/arm/mach-ixp4xx/avila-setup.c
arch/arm/mach-ixp4xx/dsmg600-setup.c
arch/arm/mach-ixp4xx/fsg-setup.c
arch/arm/mach-ixp4xx/ixdp425-setup.c
arch/arm/mach-ixp4xx/nas100d-setup.c
arch/arm/mach-ixp4xx/nslu2-setup.c
arch/arm/mach-pxa/palmz72.c
arch/arm/mach-pxa/viper.c
arch/arm/mach-rockchip/Kconfig
arch/arm/mach-rpc/ecard.c
arch/arm/mach-sa1100/simpad.c
arch/arm/mach-shmobile/Kconfig
arch/arm/mach-tegra/Kconfig
arch/arm/mm/Kconfig
arch/arm/mm/dma-mapping-nommu.c
arch/arm/mm/dma-mapping.c
arch/arm64/Kconfig
arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
arch/arm64/include/asm/atomic_lse.h
arch/arm64/include/asm/pci.h
arch/arm64/kernel/arm64ksyms.c
arch/arm64/lib/tishift.S
arch/arm64/mm/dma-mapping.c
arch/arm64/mm/fault.c
arch/arm64/mm/mmu.c
arch/c6x/Kconfig
arch/c6x/include/asm/Kbuild
arch/c6x/include/asm/dma-mapping.h [deleted file]
arch/c6x/include/asm/setup.h
arch/c6x/kernel/Makefile
arch/c6x/kernel/dma.c [deleted file]
arch/c6x/mm/dma-coherent.c
arch/h8300/include/asm/pci.h
arch/hexagon/Kconfig
arch/hexagon/kernel/dma.c
arch/ia64/Kconfig
arch/ia64/hp/common/sba_iommu.c
arch/ia64/hp/sim/simserial.c
arch/ia64/include/asm/pci.h
arch/ia64/kernel/dma-mapping.c
arch/ia64/kernel/palinfo.c
arch/ia64/kernel/perfmon.c
arch/ia64/kernel/salinfo.c
arch/ia64/kernel/setup.c
arch/ia64/sn/kernel/io_common.c
arch/ia64/sn/kernel/sn2/prominfo_proc.c
arch/ia64/sn/kernel/sn2/sn_proc_fs.c
arch/m68k/include/asm/pci.h
arch/m68k/kernel/setup_mm.c
arch/microblaze/Kconfig
arch/microblaze/include/asm/pci.h
arch/microblaze/kernel/dma.c
arch/mips/Kconfig
arch/mips/cavium-octeon/Kconfig
arch/mips/include/asm/pci.h
arch/mips/kernel/process.c
arch/mips/kernel/ptrace.c
arch/mips/kernel/ptrace32.c
arch/mips/loongson64/Kconfig
arch/mips/mm/dma-default.c
arch/mips/netlogic/Kconfig
arch/mips/pci/ops-pmcmsp.c
arch/mips/sibyte/common/bus_watcher.c
arch/nds32/Kconfig
arch/nds32/Kconfig.cpu
arch/nds32/Makefile
arch/nds32/include/asm/Kbuild
arch/nds32/include/asm/bitfield.h
arch/nds32/include/asm/cacheflush.h
arch/nds32/include/asm/dma-mapping.h [deleted file]
arch/nds32/include/asm/io.h
arch/nds32/include/asm/page.h
arch/nds32/include/asm/pgtable.h
arch/nds32/kernel/dma.c
arch/nds32/kernel/ex-entry.S
arch/nds32/kernel/head.S
arch/nds32/kernel/setup.c
arch/nds32/kernel/stacktrace.c
arch/nds32/kernel/vdso.c
arch/nds32/lib/copy_page.S
arch/nds32/mm/alignment.c
arch/nds32/mm/cacheflush.c
arch/nds32/mm/init.c
arch/openrisc/kernel/dma.c
arch/parisc/Kconfig
arch/parisc/include/asm/pci.h
arch/parisc/kernel/pci-dma.c
arch/parisc/kernel/pdc_chassis.c
arch/parisc/kernel/setup.c
arch/powerpc/Kconfig
arch/powerpc/include/asm/kvm_book3s.h
arch/powerpc/include/asm/pci.h
arch/powerpc/kernel/asm-offsets.c
arch/powerpc/kernel/cpu_setup_power.S
arch/powerpc/kernel/dma.c
arch/powerpc/kernel/dt_cpu_ftrs.c
arch/powerpc/kernel/eeh.c
arch/powerpc/kernel/rtas-proc.c
arch/powerpc/kvm/book3s_64_mmu_radix.c
arch/powerpc/kvm/book3s_hv.c
arch/powerpc/kvm/book3s_hv_rmhandlers.S
arch/powerpc/kvm/book3s_xive_template.c
arch/powerpc/platforms/Kconfig.cputype
arch/powerpc/platforms/cell/spufs/sched.c
arch/riscv/Kconfig
arch/riscv/include/asm/dma-mapping.h [new file with mode: 0644]
arch/riscv/include/asm/pci.h
arch/riscv/kernel/setup.c
arch/s390/Kconfig
arch/s390/include/asm/pci.h
arch/s390/kernel/sysinfo.c
arch/s390/kvm/vsie.c
arch/s390/pci/pci_dma.c
arch/s390/purgatory/Makefile
arch/sh/Kconfig
arch/sh/drivers/dma/dma-api.c
arch/sh/include/asm/pci.h
arch/sh/kernel/dma-nommu.c
arch/sh/mm/consistent.c
arch/sparc/Kconfig
arch/sparc/include/asm/iommu-common.h [new file with mode: 0644]
arch/sparc/include/asm/iommu_64.h
arch/sparc/include/asm/pci_32.h
arch/sparc/include/asm/pci_64.h
arch/sparc/include/uapi/asm/jsflash.h [deleted file]
arch/sparc/kernel/Makefile
arch/sparc/kernel/dma.c [deleted file]
arch/sparc/kernel/iommu-common.c [new file with mode: 0644]
arch/sparc/kernel/iommu.c
arch/sparc/kernel/ioport.c
arch/sparc/kernel/ldc.c
arch/sparc/kernel/pci_sun4v.c
arch/um/drivers/ubd_kern.c
arch/unicore32/Kconfig
arch/unicore32/mm/Kconfig
arch/x86/Kconfig
arch/x86/entry/syscalls/syscall_32.tbl
arch/x86/entry/syscalls/syscall_64.tbl
arch/x86/include/asm/dma-mapping.h
arch/x86/include/asm/pci.h
arch/x86/kernel/apm_32.c
arch/x86/kernel/cpu/common.c
arch/x86/kernel/pci-dma.c
arch/x86/kvm/cpuid.c
arch/x86/kvm/hyperv.c
arch/x86/kvm/lapic.c
arch/x86/kvm/x86.c
arch/xtensa/Kconfig
arch/xtensa/include/asm/pci.h
arch/xtensa/kernel/pci-dma.c
arch/xtensa/platforms/iss/console.c
block/bfq-cgroup.c
block/bfq-iosched.c
block/bfq-iosched.h
block/bio-integrity.c
block/bio.c
block/blk-core.c
block/blk-integrity.c
block/blk-lib.c
block/blk-merge.c
block/blk-mq-debugfs.c
block/blk-mq-sched.c
block/blk-mq-sched.h
block/blk-mq-sysfs.c
block/blk-mq-tag.c
block/blk-mq.c
block/blk-mq.h
block/blk-stat.c
block/blk-stat.h
block/blk-sysfs.c
block/blk-throttle.c
block/blk-timeout.c
block/blk-wbt.c
block/blk-wbt.h
block/blk-zoned.c
block/blk.h
block/bounce.c
block/bsg-lib.c
block/bsg.c
block/cfq-iosched.c
block/deadline-iosched.c
block/elevator.c
block/genhd.c
block/kyber-iosched.c
block/mq-deadline.c
block/partition-generic.c
block/scsi_ioctl.c
crypto/af_alg.c
crypto/algif_aead.c
crypto/algif_hash.c
crypto/algif_rng.c
crypto/algif_skcipher.c
crypto/proc.c
drivers/acpi/ac.c
drivers/acpi/battery.c
drivers/acpi/button.c
drivers/amba/bus.c
drivers/ata/libata-eh.c
drivers/atm/zatm.c
drivers/base/dma-mapping.c
drivers/base/node.c
drivers/base/platform.c
drivers/base/regmap/regmap-mmio.c
drivers/base/regmap/regmap-slimbus.c
drivers/bcma/driver_mips.c
drivers/bcma/main.c
drivers/block/DAC960.c
drivers/block/DAC960.h
drivers/block/aoe/aoeblk.c
drivers/block/aoe/aoecmd.c
drivers/block/brd.c
drivers/block/drbd/drbd_bitmap.c
drivers/block/drbd/drbd_debugfs.c
drivers/block/drbd/drbd_int.h
drivers/block/drbd/drbd_main.c
drivers/block/drbd/drbd_proc.c
drivers/block/drbd/drbd_receiver.c
drivers/block/drbd/drbd_req.c
drivers/block/drbd/drbd_req.h
drivers/block/floppy.c
drivers/block/loop.c
drivers/block/loop.h
drivers/block/mtip32xx/mtip32xx.c
drivers/block/nbd.c
drivers/block/null_blk.c
drivers/block/paride/pd.c
drivers/block/pktcdvd.c
drivers/block/ps3disk.c
drivers/block/ps3vram.c
drivers/block/rbd.c
drivers/block/rsxx/core.c
drivers/block/sx8.c
drivers/block/virtio_blk.c
drivers/block/xen-blkback/blkback.c
drivers/block/xen-blkback/xenbus.c
drivers/block/xen-blkfront.c
drivers/cdrom/cdrom.c
drivers/char/apm-emulation.c
drivers/char/ds1620.c
drivers/char/efirtc.c
drivers/char/misc.c
drivers/char/nvram.c
drivers/char/pcmcia/synclink_cs.c
drivers/char/random.c
drivers/char/rtc.c
drivers/char/toshiba.c
drivers/connector/connector.c
drivers/crypto/inside-secure/safexcel.c
drivers/dma/qcom/hidma_mgmt.c
drivers/firmware/qcom_scm-32.c
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
drivers/gpu/drm/drm_dp_helper.c
drivers/gpu/drm/i915/i915_query.c
drivers/gpu/drm/i915/intel_lvds.c
drivers/gpu/drm/meson/meson_dw_hdmi.c
drivers/gpu/drm/omapdrm/dss/sdi.c
drivers/gpu/drm/rcar-du/rcar_lvds.c
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
drivers/gpu/drm/vmwgfx/vmwgfx_msg.h
drivers/gpu/host1x/bus.c
drivers/hwmon/Kconfig
drivers/hwmon/asus_atk0110.c
drivers/hwmon/fschmd.c
drivers/hwmon/hwmon.c
drivers/hwmon/k10temp.c
drivers/hwmon/ltc2990.c
drivers/hwmon/mc13783-adc.c
drivers/hwtracing/intel_th/msu.c
drivers/hwtracing/stm/core.c
drivers/i2c/busses/i2c-ocores.c
drivers/ide/ide-atapi.c
drivers/ide/ide-cd.c
drivers/ide/ide-cd_ioctl.c
drivers/ide/ide-devsets.c
drivers/ide/ide-disk.c
drivers/ide/ide-disk_proc.c
drivers/ide/ide-dma.c
drivers/ide/ide-floppy_proc.c
drivers/ide/ide-ioctls.c
drivers/ide/ide-lib.c
drivers/ide/ide-park.c
drivers/ide/ide-pm.c
drivers/ide/ide-probe.c
drivers/ide/ide-proc.c
drivers/ide/ide-tape.c
drivers/ide/ide-taskfile.c
drivers/iio/adc/Kconfig
drivers/iio/adc/ad7793.c
drivers/iio/adc/at91-sama5d2_adc.c
drivers/iio/adc/stm32-dfsdm-adc.c
drivers/iio/buffer/industrialio-buffer-dma.c
drivers/iio/buffer/kfifo_buf.c
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
drivers/infiniband/core/cache.c
drivers/infiniband/hw/bnxt_re/main.c
drivers/infiniband/hw/bnxt_re/qplib_fp.c
drivers/infiniband/hw/bnxt_re/qplib_fp.h
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
drivers/infiniband/ulp/srpt/Kconfig
drivers/input/misc/hp_sdc_rtc.c
drivers/input/mouse/elan_i2c_smbus.c
drivers/input/mouse/synaptics.c
drivers/iommu/Kconfig
drivers/isdn/capi/capi.c
drivers/isdn/capi/capidrv.c
drivers/isdn/capi/kcapi.c
drivers/isdn/capi/kcapi_proc.c
drivers/isdn/gigaset/capi.c
drivers/isdn/hardware/avm/avmcard.h
drivers/isdn/hardware/avm/b1.c
drivers/isdn/hardware/avm/b1dma.c
drivers/isdn/hardware/avm/b1isa.c
drivers/isdn/hardware/avm/b1pci.c
drivers/isdn/hardware/avm/b1pcmcia.c
drivers/isdn/hardware/avm/c4.c
drivers/isdn/hardware/avm/t1isa.c
drivers/isdn/hardware/avm/t1pci.c
drivers/isdn/hardware/eicon/capimain.c
drivers/isdn/hardware/eicon/diva.c
drivers/isdn/hardware/eicon/diva.h
drivers/isdn/hardware/eicon/diva_didd.c
drivers/isdn/hardware/eicon/divasi.c
drivers/isdn/hardware/eicon/divasmain.c
drivers/isdn/hysdn/hycapi.c
drivers/isdn/mISDN/socket.c
drivers/lightnvm/core.c
drivers/lightnvm/pblk-cache.c
drivers/lightnvm/pblk-core.c
drivers/lightnvm/pblk-gc.c
drivers/lightnvm/pblk-init.c
drivers/lightnvm/pblk-map.c
drivers/lightnvm/pblk-rb.c
drivers/lightnvm/pblk-read.c
drivers/lightnvm/pblk-recovery.c
drivers/lightnvm/pblk-rl.c
drivers/lightnvm/pblk-sysfs.c
drivers/lightnvm/pblk-write.c
drivers/lightnvm/pblk.h
drivers/macintosh/via-pmu.c
drivers/md/bcache/bcache.h
drivers/md/bcache/bset.c
drivers/md/bcache/bset.h
drivers/md/bcache/btree.c
drivers/md/bcache/io.c
drivers/md/bcache/request.c
drivers/md/bcache/super.c
drivers/md/bcache/sysfs.c
drivers/md/bcache/util.c
drivers/md/bcache/util.h
drivers/md/dm-bio-prison-v1.c
drivers/md/dm-bio-prison-v2.c
drivers/md/dm-cache-target.c
drivers/md/dm-core.h
drivers/md/dm-crypt.c
drivers/md/dm-integrity.c
drivers/md/dm-io.c
drivers/md/dm-kcopyd.c
drivers/md/dm-log-userspace-base.c
drivers/md/dm-mpath.c
drivers/md/dm-region-hash.c
drivers/md/dm-rq.c
drivers/md/dm-snap.c
drivers/md/dm-thin.c
drivers/md/dm-verity-fec.c
drivers/md/dm-verity-fec.h
drivers/md/dm-zoned-target.c
drivers/md/dm.c
drivers/md/md-faulty.c
drivers/md/md-linear.c
drivers/md/md-multipath.c
drivers/md/md-multipath.h
drivers/md/md.c
drivers/md/md.h
drivers/md/raid0.c
drivers/md/raid1.c
drivers/md/raid1.h
drivers/md/raid10.c
drivers/md/raid10.h
drivers/md/raid5-cache.c
drivers/md/raid5-ppl.c
drivers/md/raid5.c
drivers/md/raid5.h
drivers/media/pci/saa7164/saa7164-core.c
drivers/media/pci/zoran/videocodec.c
drivers/memstick/core/ms_block.c
drivers/memstick/core/mspro_block.c
drivers/message/fusion/mptbase.c
drivers/message/fusion/mptsas.c
drivers/mfd/mc13xxx-core.c
drivers/misc/sgi-gru/gruprocfs.c
drivers/mmc/core/block.c
drivers/mmc/core/queue.c
drivers/mmc/core/sdio_uart.c
drivers/mmc/host/sdhci-iproc.c
drivers/mtd/devices/Kconfig
drivers/mtd/devices/m25p80.c
drivers/mtd/mtd_blkdevs.c
drivers/mtd/mtdcore.c
drivers/net/bonding/bond_procfs.c
drivers/net/dsa/b53/b53_common.c
drivers/net/dsa/b53/b53_mdio.c
drivers/net/dsa/b53/b53_priv.h
drivers/net/ethernet/amd/pcnet32.c
drivers/net/ethernet/cisco/enic/enic_main.c
drivers/net/ethernet/emulex/benet/be_main.c
drivers/net/ethernet/freescale/fec_main.c
drivers/net/ethernet/freescale/fec_ptp.c
drivers/net/ethernet/ibm/ibmvnic.c
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
drivers/net/ethernet/mellanox/mlx4/icm.c
drivers/net/ethernet/mellanox/mlx4/intf.c
drivers/net/ethernet/mellanox/mlx4/qp.c
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
drivers/net/ethernet/mellanox/mlx5/core/fpga/ipsec.c
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
drivers/net/ethernet/natsemi/sonic.c
drivers/net/ethernet/qlogic/qed/qed_cxt.c
drivers/net/ethernet/sfc/efx.c
drivers/net/ethernet/sfc/falcon/efx.c
drivers/net/ethernet/socionext/netsec.c
drivers/net/ethernet/ti/davinci_emac.c
drivers/net/hamradio/bpqether.c
drivers/net/hamradio/scc.c
drivers/net/hamradio/yam.c
drivers/net/phy/bcm-cygnus.c
drivers/net/phy/bcm-phy-lib.c
drivers/net/phy/bcm-phy-lib.h
drivers/net/phy/bcm7xxx.c
drivers/net/ppp/ppp_generic.c
drivers/net/ppp/pppoe.c
drivers/net/ppp/pptp.c
drivers/net/tun.c
drivers/net/usb/cdc_mbim.c
drivers/net/usb/qmi_wwan.c
drivers/net/virtio_net.c
drivers/net/wireless/atmel/atmel.c
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
drivers/net/wireless/intersil/hostap/hostap_ap.c
drivers/net/wireless/intersil/hostap/hostap_hw.c
drivers/net/wireless/intersil/hostap/hostap_proc.c
drivers/net/wireless/mac80211_hwsim.c
drivers/net/wireless/ralink/rt2x00/rt2x00queue.c
drivers/net/wireless/ray_cs.c
drivers/nubus/proc.c
drivers/nvme/host/Kconfig
drivers/nvme/host/core.c
drivers/nvme/host/fabrics.c
drivers/nvme/host/fabrics.h
drivers/nvme/host/fc.c
drivers/nvme/host/nvme.h
drivers/nvme/host/pci.c
drivers/nvme/host/rdma.c
drivers/nvme/host/trace.h
drivers/nvme/target/Kconfig
drivers/nvme/target/Makefile
drivers/nvme/target/admin-cmd.c
drivers/nvme/target/core.c
drivers/nvme/target/discovery.c
drivers/nvme/target/fabrics-cmd.c
drivers/nvme/target/fc.c
drivers/nvme/target/io-cmd-bdev.c [new file with mode: 0644]
drivers/nvme/target/io-cmd-file.c [new file with mode: 0644]
drivers/nvme/target/io-cmd.c [deleted file]
drivers/nvme/target/loop.c
drivers/nvme/target/nvmet.h
drivers/of/device.c
drivers/of/of_reserved_mem.c
drivers/parisc/Kconfig
drivers/parisc/ccio-dma.c
drivers/parisc/sba_iommu.c
drivers/pci/Kconfig
drivers/pci/bus.c
drivers/pci/pci-driver.c
drivers/pci/proc.c
drivers/platform/chrome/Kconfig
drivers/platform/chrome/Makefile
drivers/platform/chrome/chromeos_laptop.c
drivers/platform/chrome/chromeos_tbmc.c [new file with mode: 0644]
drivers/platform/chrome/cros_ec_lightbar.c
drivers/platform/chrome/cros_ec_lpc.c
drivers/platform/chrome/cros_ec_sysfs.c
drivers/platform/chrome/cros_ec_vbc.c
drivers/platform/x86/asus-wmi.c
drivers/platform/x86/toshiba_acpi.c
drivers/pnp/pnpbios/proc.c
drivers/rtc/rtc-proc.c
drivers/s390/block/dasd.c
drivers/s390/block/dasd_proc.c
drivers/s390/char/tape_proc.c
drivers/sbus/char/Kconfig
drivers/sbus/char/Makefile
drivers/sbus/char/jsflash.c [deleted file]
drivers/scsi/gdth.c
drivers/scsi/libiscsi.c
drivers/scsi/megaraid.c
drivers/scsi/megaraid.h
drivers/scsi/megaraid/megaraid_sas_base.c
drivers/scsi/mvumi.c
drivers/scsi/osd/osd_initiator.c
drivers/scsi/osst.c
drivers/scsi/qla4xxx/ql4_os.c
drivers/scsi/scsi_error.c
drivers/scsi/scsi_lib.c
drivers/scsi/scsi_transport_fc.c
drivers/scsi/scsi_transport_iscsi.c
drivers/scsi/scsi_transport_sas.c
drivers/scsi/scsi_transport_srp.c
drivers/scsi/sg.c
drivers/scsi/st.c
drivers/scsi/ufs/ufshcd.c
drivers/soc/lantiq/gphy.c
drivers/spi/Kconfig
drivers/spi/Makefile
drivers/spi/internals.h [new file with mode: 0644]
drivers/spi/spi-bcm-qspi.c
drivers/spi/spi-bcm53xx.c [deleted file]
drivers/spi/spi-bcm53xx.h [deleted file]
drivers/spi/spi-bcm63xx-hsspi.c
drivers/spi/spi-cadence.c
drivers/spi/spi-fsl-lpspi.c
drivers/spi/spi-imx.c
drivers/spi/spi-mem.c [new file with mode: 0644]
drivers/spi/spi-meson-spicc.c
drivers/spi/spi-mpc52xx.c
drivers/spi/spi-mxs.c
drivers/spi/spi-omap2-mcspi.c
drivers/spi/spi-pxa2xx-dma.c
drivers/spi/spi-pxa2xx.c
drivers/spi/spi-pxa2xx.h
drivers/spi/spi-s3c64xx.c
drivers/spi/spi-sh-msiof.c
drivers/spi/spi-stm32.c
drivers/spi/spi-ti-qspi.c
drivers/spi/spi-zynqmp-gqspi.c
drivers/spi/spi.c
drivers/ssb/Kconfig
drivers/staging/comedi/drivers/serial2002.c
drivers/staging/comedi/proc.c
drivers/staging/fwserial/fwserial.c
drivers/staging/ipx/af_ipx.c
drivers/staging/ipx/ipx_proc.c
drivers/staging/lustre/lnet/Kconfig
drivers/staging/ncpfs/dir.c
drivers/staging/rtl8192u/r8192U_core.c
drivers/target/target_core_iblock.c
drivers/target/target_core_iblock.h
drivers/target/target_core_pscsi.c
drivers/thunderbolt/icm.c
drivers/tty/amiserial.c
drivers/tty/cyclades.c
drivers/tty/serial/serial_core.c
drivers/tty/synclink.c
drivers/tty/synclink_gt.c
drivers/tty/synclinkmp.c
drivers/tty/tty_ldisc.c
drivers/usb/gadget/udc/at91_udc.c
drivers/usb/gadget/udc/fsl_udc_core.c
drivers/usb/gadget/udc/goku_udc.c
drivers/usb/gadget/udc/omap_udc.c
drivers/usb/serial/usb-serial.c
drivers/vfio/vfio_iommu_type1.c
drivers/vfio/virqfd.c
drivers/vhost/net.c
drivers/vhost/vhost.c
drivers/video/fbdev/core/fbmem.c
drivers/video/fbdev/via/viafbdev.c
drivers/w1/w1_io.c
drivers/zorro/proc.c
fs/9p/vfs_inode.c
fs/Kconfig
fs/adfs/dir.c
fs/afs/proc.c
fs/afs/security.c
fs/afs/vlclient.c
fs/aio.c
fs/bfs/dir.c
fs/block_dev.c
fs/btrfs/extent_io.c
fs/cachefiles/proc.c
fs/cifs/Kconfig
fs/cifs/cifs_debug.c
fs/cifs/dir.c
fs/cramfs/inode.c
fs/dax.c
fs/dcache.c
fs/direct-io.c
fs/eventfd.c
fs/eventpoll.c
fs/exofs/ore.c
fs/exofs/super.c
fs/ext4/ext4.h
fs/ext4/mballoc.c
fs/ext4/sysfs.c
fs/f2fs/sysfs.c
fs/fat/namei_msdos.c
fs/fat/namei_vfat.c
fs/fcntl.c
fs/filesystems.c
fs/freevxfs/vxfs_lookup.c
fs/fscache/histogram.c
fs/fscache/internal.h
fs/fscache/proc.c
fs/fscache/stats.c
fs/hfs/dir.c
fs/hfs/inode.c
fs/hfsplus/dir.c
fs/inode.c
fs/internal.h
fs/jfs/jfs_debug.c
fs/jfs/jfs_debug.h
fs/jfs/jfs_logmgr.c
fs/jfs/jfs_metapage.c
fs/jfs/jfs_txnmgr.c
fs/jfs/jfs_xtree.c
fs/locks.c
fs/minix/namei.c
fs/namei.c
fs/nfs/client.c
fs/nfsd/blocklayout.c
fs/ocfs2/cluster/heartbeat.c
fs/omfs/dir.c
fs/open.c
fs/openpromfs/inode.c
fs/orangefs/namei.c
fs/pipe.c
fs/proc/array.c
fs/proc/base.c
fs/proc/cmdline.c
fs/proc/consoles.c
fs/proc/devices.c
fs/proc/fd.c
fs/proc/generic.c
fs/proc/internal.h
fs/proc/interrupts.c
fs/proc/loadavg.c
fs/proc/meminfo.c
fs/proc/namespaces.c
fs/proc/nommu.c
fs/proc/proc_net.c
fs/proc/proc_sysctl.c
fs/proc/proc_tty.c
fs/proc/self.c
fs/proc/softirqs.c
fs/proc/task_mmu.c
fs/proc/thread_self.c
fs/proc/uptime.c
fs/proc/version.c
fs/qnx4/namei.c
fs/qnx6/namei.c
fs/read_write.c
fs/reiserfs/procfs.c
fs/romfs/super.c
fs/select.c
fs/seq_file.c
fs/super.c
fs/sysv/namei.c
fs/timerfd.c
fs/ubifs/dir.c
fs/xattr.c
fs/xfs/xfs_aops.c
fs/xfs/xfs_aops.h
fs/xfs/xfs_iops.c
fs/xfs/xfs_stats.c
fs/xfs/xfs_super.c
include/asm-generic/dma-mapping.h
include/asm-generic/pci.h
include/crypto/if_alg.h
include/drm/bridge/dw_hdmi.h
include/linux/aio.h
include/linux/atalk.h
include/linux/bio.h
include/linux/blk-mq.h
include/linux/blk_types.h
include/linux/blkdev.h
include/linux/bpf_verifier.h
include/linux/bsg-lib.h
include/linux/bsg.h
include/linux/compat.h
include/linux/device.h
include/linux/dma-debug.h
include/linux/dma-direct.h
include/linux/dma-mapping.h
include/linux/dma-noncoherent.h [new file with mode: 0644]
include/linux/elevator.h
include/linux/fs.h
include/linux/gfp.h
include/linux/hmm.h
include/linux/ide.h
include/linux/iio/buffer_impl.h
include/linux/iommu-common.h [deleted file]
include/linux/iommu-helper.h
include/linux/isdn/capilli.h
include/linux/libata.h
include/linux/lightnvm.h
include/linux/mempool.h
include/linux/memremap.h
include/linux/mfd/cros_ec.h
include/linux/mfd/mc13xxx.h
include/linux/mmu_notifier.h
include/linux/net.h
include/linux/node.h
include/linux/nvme.h
include/linux/of_device.h
include/linux/pci.h
include/linux/pktcdvd.h
include/linux/platform_device.h
include/linux/poll.h
include/linux/proc_fs.h
include/linux/regmap.h
include/linux/sbitmap.h
include/linux/sched/mm.h
include/linux/seq_file_net.h
include/linux/skbuff.h
include/linux/spi/spi-mem.h [new file with mode: 0644]
include/linux/spi/spi.h
include/linux/sunrpc/rpc_pipe_fs.h
include/linux/swait.h
include/linux/swap.h
include/linux/syscalls.h
include/linux/tty.h
include/linux/tty_driver.h
include/linux/xattr.h
include/net/ax25.h
include/net/bluetooth/bluetooth.h
include/net/busy_poll.h
include/net/ip6_fib.h
include/net/ip_vs.h
include/net/iucv/af_iucv.h
include/net/netrom.h
include/net/phonet/pn_dev.h
include/net/ping.h
include/net/raw.h
include/net/rose.h
include/net/sctp/sctp.h
include/net/sock.h
include/net/tcp.h
include/net/udp.h
include/scsi/osd_initiator.h
include/scsi/scsi_host.h
include/trace/events/sched.h
include/uapi/asm-generic/unistd.h
include/uapi/linux/aio_abi.h
include/uapi/linux/bpf.h
include/uapi/linux/nl80211.h
include/uapi/linux/ppp-ioctl.h
include/uapi/linux/types.h
init/main.c
ipc/shm.c
kernel/bpf/verifier.c
kernel/cgroup/cgroup-internal.h
kernel/cgroup/cgroup-v1.c
kernel/cgroup/cgroup.c
kernel/dma.c
kernel/exec_domain.c
kernel/irq/proc.c
kernel/kthread.c
kernel/locking/lockdep_proc.c
kernel/power/swap.c
kernel/resource.c
kernel/sched/core.c
kernel/sched/deadline.c
kernel/sched/debug.c
kernel/sched/sched.h
kernel/sched/stats.c
kernel/sched/topology.c
kernel/sys.c
kernel/sys_ni.c
kernel/time/timer_list.c
kernel/trace/trace.c
kernel/trace/trace.h
kernel/trace/trace_events_trigger.c
lib/Kconfig
lib/Kconfig.debug
lib/Makefile
lib/dma-debug.c
lib/dma-direct.c
lib/dma-noncoherent.c [new file with mode: 0644]
lib/iommu-common.c [deleted file]
lib/iommu-helper.c
lib/radix-tree.c
lib/sbitmap.c
lib/swiotlb.c
mm/Kconfig
mm/backing-dev.c
mm/cleancache.c
mm/frontswap.c
mm/hmm.c
mm/huge_memory.c
mm/hugetlb.c
mm/kasan/kasan.c
mm/ksm.c
mm/memcontrol.c
mm/memory_hotplug.c
mm/mempool.c
mm/mmap.c
mm/page_alloc.c
mm/rmap.c
mm/swapfile.c
mm/util.c
mm/vmalloc.c
mm/vmscan.c
mm/vmstat.c
net/8021q/vlanproc.c
net/9p/Kconfig
net/9p/trans_fd.c
net/appletalk/aarp.c
net/appletalk/atalk_proc.c
net/appletalk/ddp.c
net/atm/br2684.c
net/atm/clip.c
net/atm/common.c
net/atm/common.h
net/atm/lec.c
net/atm/proc.c
net/atm/pvc.c
net/atm/svc.c
net/ax25/af_ax25.c
net/ax25/ax25_route.c
net/ax25/ax25_uid.c
net/batman-adv/multicast.c
net/batman-adv/translation-table.c
net/bluetooth/af_bluetooth.c
net/bluetooth/bnep/sock.c
net/bluetooth/cmtp/capi.c
net/bluetooth/cmtp/sock.c
net/bluetooth/hci_sock.c
net/bluetooth/hidp/sock.c
net/bluetooth/l2cap_sock.c
net/bluetooth/rfcomm/sock.c
net/bluetooth/sco.c
net/bridge/netfilter/ebtables.c
net/caif/caif_socket.c
net/can/bcm.c
net/can/proc.c
net/can/raw.c
net/core/datagram.c
net/core/dev.c
net/core/neighbour.c
net/core/net-procfs.c
net/core/net-sysfs.c
net/core/sock.c
net/dccp/dccp.h
net/dccp/ipv4.c
net/dccp/ipv6.c
net/dccp/proto.c
net/decnet/af_decnet.c
net/decnet/dn_dev.c
net/decnet/dn_neigh.c
net/decnet/dn_route.c
net/ieee802154/socket.c
net/ipv4/af_inet.c
net/ipv4/arp.c
net/ipv4/fib_frontend.c
net/ipv4/fib_trie.c
net/ipv4/igmp.c
net/ipv4/ip_sockglue.c
net/ipv4/ip_tunnel.c
net/ipv4/ipconfig.c
net/ipv4/ipmr.c
net/ipv4/ipmr_base.c
net/ipv4/ping.c
net/ipv4/proc.c
net/ipv4/raw.c
net/ipv4/route.c
net/ipv4/tcp.c
net/ipv4/tcp_ipv4.c
net/ipv4/udp.c
net/ipv4/udplite.c
net/ipv6/addrconf.c
net/ipv6/af_inet6.c
net/ipv6/anycast.c
net/ipv6/ip6_fib.c
net/ipv6/ip6_flowlabel.c
net/ipv6/ip6_tunnel.c
net/ipv6/ip6mr.c
net/ipv6/mcast.c
net/ipv6/ping.c
net/ipv6/proc.c
net/ipv6/raw.c
net/ipv6/route.c
net/ipv6/seg6_iptunnel.c
net/ipv6/sit.c
net/ipv6/tcp_ipv6.c
net/ipv6/udp.c
net/ipv6/udplite.c
net/ipv6/xfrm6_policy.c
net/iucv/af_iucv.c
net/kcm/kcmproc.c
net/kcm/kcmsock.c
net/key/af_key.c
net/l2tp/l2tp_ip.c
net/l2tp/l2tp_ip6.c
net/l2tp/l2tp_ppp.c
net/llc/af_llc.c
net/llc/llc_proc.c
net/mac80211/mesh_plink.c
net/ncsi/ncsi-netlink.c
net/netfilter/ipvs/ip_vs_app.c
net/netfilter/ipvs/ip_vs_conn.c
net/netfilter/ipvs/ip_vs_ctl.c
net/netfilter/nf_conntrack_expect.c
net/netfilter/nf_conntrack_standalone.c
net/netfilter/nf_log.c
net/netfilter/nf_synproxy_core.c
net/netfilter/nf_tables_api.c
net/netfilter/nf_tables_core.c
net/netfilter/nfnetlink_acct.c
net/netfilter/nfnetlink_cthelper.c
net/netfilter/nfnetlink_log.c
net/netfilter/nfnetlink_queue.c
net/netfilter/nft_ct.c
net/netfilter/nft_limit.c
net/netfilter/nft_meta.c
net/netfilter/x_tables.c
net/netfilter/xt_hashlimit.c
net/netlink/af_netlink.c
net/netrom/af_netrom.c
net/netrom/nr_route.c
net/nfc/llcp_sock.c
net/nfc/rawsock.c
net/packet/af_packet.c
net/phonet/pn_dev.c
net/phonet/socket.c
net/qrtr/qrtr.c
net/rds/Kconfig
net/rose/af_rose.c
net/rose/rose_route.c
net/rxrpc/af_rxrpc.c
net/rxrpc/ar-internal.h
net/rxrpc/net_ns.c
net/rxrpc/proc.c
net/sched/cls_api.c
net/sched/cls_flower.c
net/sched/sch_api.c
net/sctp/ipv6.c
net/sctp/objcnt.c
net/sctp/proc.c
net/sctp/protocol.c
net/sctp/socket.c
net/socket.c
net/sunrpc/Kconfig
net/sunrpc/rpc_pipe.c
net/tipc/socket.c
net/unix/af_unix.c
net/vmw_vsock/af_vsock.c
net/wireless/nl80211.c
net/wireless/reg.c
net/wireless/wext-proc.c
net/x25/af_x25.c
net/x25/x25_proc.c
net/xfrm/xfrm_policy.c
net/xfrm/xfrm_proc.c
scripts/checkpatch.pl
scripts/documentation-file-ref-check
scripts/spdxcheck.py [new file with mode: 0755]
security/keys/proc.c
security/selinux/hooks.c
security/selinux/ss/services.c
sound/core/timer.c
sound/pci/hda/hda_local.h
tools/include/uapi/linux/bpf.h
tools/perf/Documentation/perf.data-file-format.txt
tools/perf/tests/topology.c
tools/perf/util/bpf-loader.c
tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
tools/perf/util/evsel.h
tools/perf/util/parse-events.c
tools/perf/util/parse-events.h
tools/perf/util/parse-events.y
tools/perf/util/scripting-engines/trace-event-python.c
tools/testing/radix-tree/idr-test.c
tools/testing/selftests/bpf/config
tools/testing/selftests/net/config
tools/testing/selftests/net/reuseport_bpf_numa.c
tools/virtio/linux/dma-mapping.h
virt/kvm/eventfd.c

index 708dc4c166e487c77aa6f6f4adf5189ab84f3f09..2754fe83f0d449623b7e0ac1fe4df5f061cdf773 100644 (file)
@@ -64,8 +64,6 @@ auxdisplay/
        - misc. LCD driver documentation (cfag12864b, ks0108).
 backlight/
        - directory with info on controlling backlights in flat panel displays
-bcache.txt
-       - Block-layer cache on fast SSDs to improve slow (raid) I/O performance.
 block/
        - info on the Block I/O (BIO) layer.
 blockdev/
@@ -78,18 +76,10 @@ bus-devices/
        - directory with info on TI GPMC (General Purpose Memory Controller)
 bus-virt-phys-mapping.txt
        - how to access I/O mapped memory from within device drivers.
-cachetlb.txt
-       - describes the cache/TLB flushing interfaces Linux uses.
 cdrom/
        - directory with information on the CD-ROM drivers that Linux has.
 cgroup-v1/
        - cgroups v1 features, including cpusets and memory controller.
-cgroup-v2.txt
-       - cgroups v2 features, including cpusets and memory controller.
-circular-buffers.txt
-       - how to make use of the existing circular buffer infrastructure
-clk.txt
-       - info on the common clock framework
 cma/
        - Continuous Memory Area (CMA) debugfs interface.
 conf.py
index 5b2d0f08867cd899df072f89a059995944fb8eec..3e90e1f3bf0a004edbcd2d5b89eed1d05b7aa784 100644 (file)
@@ -90,4 +90,4 @@ Date:         December 2009
 Contact:       Lee Schermerhorn <lee.schermerhorn@hp.com>
 Description:
                The node's huge page size control/query attributes.
-               See Documentation/vm/hugetlbpage.txt
\ No newline at end of file
+               See Documentation/admin-guide/mm/hugetlbpage.rst
\ No newline at end of file
index e21c00571cf4f082b04f12b168e376c500bd845d..fdaa2162fae1528529b7045c6e9184ae9620bc01 100644 (file)
@@ -12,4 +12,4 @@ Description:
                        free_hugepages
                        surplus_hugepages
                        resv_hugepages
-               See Documentation/vm/hugetlbpage.txt for details.
+               See Documentation/admin-guide/mm/hugetlbpage.rst for details.
index 73e653ee248160cb5612d620eeeeff5130f07bc6..dfc13244cda3bb567591fbd6cbcfaee6905731c6 100644 (file)
@@ -40,7 +40,7 @@ Description:  Kernel Samepage Merging daemon sysfs interface
                sleep_millisecs: how many milliseconds ksm should sleep between
                scans.
 
-               See Documentation/vm/ksm.txt for more information.
+               See Documentation/vm/ksm.rst for more information.
 
 What:          /sys/kernel/mm/ksm/merge_across_nodes
 Date:          January 2013
index 2cc0a72b64be68cd5a4191dab5581a71780407e4..29601d93a1c2ea112899007c37a2f7297c655338 100644 (file)
@@ -37,7 +37,7 @@ Description:
                The alloc_calls file is read-only and lists the kernel code
                locations from which allocations for this cache were performed.
                The alloc_calls file only contains information if debugging is
-               enabled for that cache (see Documentation/vm/slub.txt).
+               enabled for that cache (see Documentation/vm/slub.rst).
 
 What:          /sys/kernel/slab/cache/alloc_fastpath
 Date:          February 2008
@@ -219,7 +219,7 @@ Contact:    Pekka Enberg <penberg@cs.helsinki.fi>,
 Description:
                The free_calls file is read-only and lists the locations of
                object frees if slab debugging is enabled (see
-               Documentation/vm/slub.txt).
+               Documentation/vm/slub.rst).
 
 What:          /sys/kernel/slab/cache/free_fastpath
 Date:          February 2008
diff --git a/Documentation/admin-guide/bcache.rst b/Documentation/admin-guide/bcache.rst
new file mode 100644 (file)
index 0000000..c0ce64d
--- /dev/null
@@ -0,0 +1,649 @@
+============================
+A block layer cache (bcache)
+============================
+
+Say you've got a big slow raid 6, and an ssd or three. Wouldn't it be
+nice if you could use them as cache... Hence bcache.
+
+Wiki and git repositories are at:
+
+  - http://bcache.evilpiepirate.org
+  - http://evilpiepirate.org/git/linux-bcache.git
+  - http://evilpiepirate.org/git/bcache-tools.git
+
+It's designed around the performance characteristics of SSDs - it only allocates
+in erase block sized buckets, and it uses a hybrid btree/log to track cached
+extents (which can be anywhere from a single sector to the bucket size). It's
+designed to avoid random writes at all costs; it fills up an erase block
+sequentially, then issues a discard before reusing it.
+
+Both writethrough and writeback caching are supported. Writeback defaults to
+off, but can be switched on and off arbitrarily at runtime. Bcache goes to
+great lengths to protect your data - it reliably handles unclean shutdown. (It
+doesn't even have a notion of a clean shutdown; bcache simply doesn't return
+writes as completed until they're on stable storage).
+
+Writeback caching can use most of the cache for buffering writes - writing
+dirty data to the backing device is always done sequentially, scanning from the
+start to the end of the index.
+
+Since random IO is what SSDs excel at, there generally won't be much benefit
+to caching large sequential IO. Bcache detects sequential IO and skips it;
+it also keeps a rolling average of the IO sizes per task, and as long as the
+average is above the cutoff it will skip all IO from that task - instead of
+caching the first 512k after every seek. Backups and large file copies should
+thus entirely bypass the cache.
+
+In the event of a data IO error on the flash, it will try to recover by reading
+from disk or invalidating cache entries.  For unrecoverable errors (metadata
+or dirty data), caching is automatically disabled; if dirty data was present
+in the cache it first disables writeback caching and waits for all dirty data
+to be flushed.
+
+Getting started:
+You'll need make-bcache from the bcache-tools repository. Both the cache device
+and backing device must be formatted before use::
+
+  make-bcache -B /dev/sdb
+  make-bcache -C /dev/sdc
+
+make-bcache has the ability to format multiple devices at the same time - if
+you format your backing devices and cache device at the same time, you won't
+have to manually attach::
+
+  make-bcache -B /dev/sda /dev/sdb -C /dev/sdc
+
+bcache-tools now ships udev rules, and bcache devices are known to the kernel
+immediately.  Without udev, you can manually register devices like this::
+
+  echo /dev/sdb > /sys/fs/bcache/register
+  echo /dev/sdc > /sys/fs/bcache/register
+
+Registering the backing device makes the bcache device show up in /dev; you can
+now format it and use it as normal. But the first time you use a new bcache
+device, it'll be running in passthrough mode until you attach it to a cache.
+If you are thinking about using bcache later, it is recommended to set up all
+your slow devices as bcache backing devices without a cache; you can choose to
+add a caching device later.
+See the 'Attaching' section below.
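+
+For example, a minimal sketch of that workflow (device names here are just
+examples)::
+
+  # now: format the slow disk as a backing device only
+  make-bcache -B /dev/sdd
+  # later: format an SSD as a cache and attach it (see 'Attaching' below)
+  make-bcache -C /dev/sdc
+  echo <CSET-UUID> > /sys/block/bcache0/bcache/attach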
+
+The devices show up as::
+
+  /dev/bcache<N>
+
+As well as (with udev)::
+
+  /dev/bcache/by-uuid/<uuid>
+  /dev/bcache/by-label/<label>
+
+To get started::
+
+  mkfs.ext4 /dev/bcache0
+  mount /dev/bcache0 /mnt
+
+You can control bcache devices through sysfs at /sys/block/bcache<N>/bcache.
+You can also control them through /sys/fs/bcache/<cset-uuid>/.
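+
+A quick way to list the available knobs (paths as above; <cset-uuid> is your
+cache set's UUID)::
+
+  ls /sys/block/bcache0/bcache/
+  ls /sys/fs/bcache/<cset-uuid>/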
+
+Cache devices are managed as sets; multiple caches per set isn't supported yet,
+but in the future it will allow mirroring of metadata and dirty data. Your new
+cache set shows up as /sys/fs/bcache/<UUID>.
+
+Attaching
+---------
+
+After your cache device and backing device are registered, the backing device
+must be attached to your cache set to enable caching. Attaching a backing
+device to a cache set is done thusly, with the UUID of the cache set in
+/sys/fs/bcache::
+
+  echo <CSET-UUID> > /sys/block/bcache0/bcache/attach
+
+This only has to be done once. The next time you reboot, just reregister all
+your bcache devices. If a backing device has data in a cache somewhere, the
+/dev/bcache<N> device won't be created until the cache shows up - particularly
+important if you have writeback caching turned on.
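+
+A minimal sketch of re-registering after a reboot (an init script or the udev
+rules shipped with bcache-tools would normally do this; device names are
+examples)::
+
+  for dev in /dev/sdb /dev/sdc; do
+      echo $dev > /sys/fs/bcache/register
+  done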
+
+If you're booting up and your cache device is gone and never coming back, you
+can force run the backing device::
+
+  echo 1 > /sys/block/sdb/bcache/running
+
+(You need to use /sys/block/sdb (or whatever your backing device is called), not
+/sys/block/bcache0, because bcache0 doesn't exist yet. If you're using a
+partition, the bcache directory would be at /sys/block/sdb/sdb2/bcache)
+
+The backing device will still use that cache set if it shows up in the future,
+but all the cached data will be invalidated. If there was dirty data in the
+cache, don't expect the filesystem to be recoverable - you will have massive
+filesystem corruption, though ext4's fsck does work miracles.
+
+Error Handling
+--------------
+
+Bcache tries to transparently handle IO errors to/from the cache device without
+affecting normal operation; if it sees too many errors (the threshold is
+configurable, and defaults to 0) it shuts down the cache device and switches all
+the backing devices to passthrough mode.
+
+ - For reads from the cache, if they error we just retry the read from the
+   backing device.
+
+ - For writethrough writes, if the write to the cache errors we just switch to
+   invalidating the data at that lba in the cache (i.e. the same thing we do for
+   a write that bypasses the cache)
+
+ - For writeback writes, we currently pass that error back up to the
+   filesystem/userspace. This could be improved - we could retry it as a write
+   that skips the cache so we don't have to error the write.
+
+ - When we detach, we first try to flush any dirty data (if we were running in
+   writeback mode). It currently doesn't do anything intelligent if it fails to
+   read some of the dirty data, though.
+
+
+Howto/cookbook
+--------------
+
+A) Starting a bcache with a missing caching device
+
+If registering the backing device doesn't help because it's already there, you
+just need to force it to run without the cache::
+
+       host:~# echo /dev/sdb1 > /sys/fs/bcache/register
+       [  119.844831] bcache: register_bcache() error opening /dev/sdb1: device already registered
+
+Next, you try to register your caching device if it's present. However
+if it's absent, or registration fails for some reason, you can still
+start your bcache without its cache, like so::
+
+       host:/sys/block/sdb/sdb1/bcache# echo 1 > running
+
+Note that this may cause data loss if you were running in writeback mode.
+
+
+B) Bcache does not find its cache::
+
+       host:/sys/block/md5/bcache# echo 0226553a-37cf-41d5-b3ce-8b1e944543a8 > attach
+       [ 1933.455082] bcache: bch_cached_dev_attach() Couldn't find uuid for md5 in set
+       [ 1933.478179] bcache: __cached_dev_store() Can't attach 0226553a-37cf-41d5-b3ce-8b1e944543a8
+       [ 1933.478179] : cache set not found
+
+In this case, the caching device was simply not registered at boot
+or disappeared and came back, and needs to be (re-)registered::
+
+       host:/sys/block/md5/bcache# echo /dev/sdh2 > /sys/fs/bcache/register
+
+
+C) Corrupt bcache crashes the kernel at device registration time:
+
+This should never happen.  If it does happen, then you have found a bug!
+Please report it to the bcache development list: linux-bcache@vger.kernel.org
+
+Be sure to provide as much information as you can, including kernel dmesg
+output if available, so that we may assist.
+
+
+D) Recovering data without bcache:
+
+If bcache is not available in the kernel, a filesystem on the backing
+device is still available at an 8KiB offset. It can be reached via a loopdev
+of the backing device created with --offset 8K, or with whatever value you
+passed as --data-offset when you originally formatted bcache with `make-bcache`.
+
+For example::
+
+       losetup -o 8192 /dev/loop0 /dev/your_bcache_backing_dev
+
+This should present your unmodified backing device data in /dev/loop0.
+
+If your cache is in writethrough mode, then you can safely discard the
+cache device without losing data.
+
+
+E) Wiping a cache device
+
+::
+
+       host:~# wipefs -a /dev/sdh2
+       16 bytes were erased at offset 0x1018 (bcache)
+       they were: c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81
+
+After you boot back with bcache enabled, you recreate the cache and attach it::
+
+       host:~# make-bcache -C /dev/sdh2
+       UUID:                   7be7e175-8f4c-4f99-94b2-9c904d227045
+       Set UUID:               5bc072a8-ab17-446d-9744-e247949913c1
+       version:                0
+       nbuckets:               106874
+       block_size:             1
+       bucket_size:            1024
+       nr_in_set:              1
+       nr_this_dev:            0
+       first_bucket:           1
+       [  650.511912] bcache: run_cache_set() invalidating existing data
+       [  650.549228] bcache: register_cache() registered cache device sdh2
+
+start backing device with missing cache::
+
+       host:/sys/block/md5/bcache# echo 1 > running
+
+attach new cache::
+
+       host:/sys/block/md5/bcache# echo 5bc072a8-ab17-446d-9744-e247949913c1 > attach
+       [  865.276616] bcache: bch_cached_dev_attach() Caching md5 as bcache0 on set 5bc072a8-ab17-446d-9744-e247949913c1
+
+
+F) Remove or replace a caching device::
+
+       host:/sys/block/sda/sda7/bcache# echo 1 > detach
+       [  695.872542] bcache: cached_dev_detach_finish() Caching disabled for sda7
+
+       host:~# wipefs -a /dev/nvme0n1p4
+       wipefs: error: /dev/nvme0n1p4: probing initialization failed: Device or resource busy
+       Ooops, it's disabled, but not unregistered, so it's still protected
+
+We need to go and unregister it::
+
+       host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# ls -l cache0
+       lrwxrwxrwx 1 root root 0 Feb 25 18:33 cache0 -> ../../../devices/pci0000:00/0000:00:1d.0/0000:70:00.0/nvme/nvme0/nvme0n1/nvme0n1p4/bcache/
+       host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# echo 1 > stop
+       kernel: [  917.041908] bcache: cache_set_free() Cache set b7ba27a1-2398-4649-8ae3-0959f57ba128 unregistered
+
+Now we can wipe it::
+
+       host:~# wipefs -a /dev/nvme0n1p4
+       /dev/nvme0n1p4: 16 bytes were erased at offset 0x00001018 (bcache): c6 85 73 f6 4e 1a 45 ca 82 65 f5 7f 48 ba 6d 81
+
+
+G) dm-crypt and bcache
+
+First set up bcache unencrypted and then install dmcrypt on top of
+/dev/bcache<N>. This will work faster than if you dmcrypt both the backing
+and caching devices and then install bcache on top. [benchmarks?]
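+
+A rough sketch of that layering, assuming cryptsetup with LUKS and an ext4
+filesystem (the mapping name is just an example)::
+
+       host:~# cryptsetup luksFormat /dev/bcache0
+       host:~# cryptsetup open /dev/bcache0 bcache0-crypt
+       host:~# mkfs.ext4 /dev/mapper/bcache0-crypt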
+
+
+H) Stop/free a registered bcache to wipe and/or recreate it
+
+Suppose that you need to free up all bcache references so that you can
+run fdisk and re-register a changed partition table, which won't work
+if there are any active backing or caching devices left on it:
+
+1) Is it present in /dev/bcache* ? (there are times where it won't be)
+
+   If so, it's easy::
+
+       host:/sys/block/bcache0/bcache# echo 1 > stop
+
+2) But if your backing device is gone, this won't work::
+
+       host:/sys/block/bcache0# cd bcache
+       bash: cd: bcache: No such file or directory
+
+   In this case, you may have to unregister the dmcrypt block device that
+   references this bcache to free it up::
+
+       host:~# dmsetup remove oldds1
+       bcache: bcache_device_free() bcache0 stopped
+       bcache: cache_set_free() Cache set 5bc072a8-ab17-446d-9744-e247949913c1 unregistered
+
+   This causes the backing bcache to be removed from /sys/fs/bcache and
+   then it can be reused.  This would be true of any block device stacking
+   where bcache is a lower device.
+
+3) In other cases, you can also look in /sys/fs/bcache/::
+
+       host:/sys/fs/bcache# ls -l */{cache?,bdev?}
+       lrwxrwxrwx 1 root root 0 Mar  5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/bdev1 -> ../../../devices/virtual/block/dm-1/bcache/
+       lrwxrwxrwx 1 root root 0 Mar  5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/cache0 -> ../../../devices/virtual/block/dm-4/bcache/
+       lrwxrwxrwx 1 root root 0 Mar  5 09:39 5bc072a8-ab17-446d-9744-e247949913c1/cache0 -> ../../../devices/pci0000:00/0000:00:01.0/0000:01:00.0/ata10/host9/target9:0:0/9:0:0:0/block/sdl/sdl2/bcache/
+
+   The device names will show which UUID is relevant, cd in that directory
+   and stop the cache::
+
+       host:/sys/fs/bcache/5bc072a8-ab17-446d-9744-e247949913c1# echo 1 > stop
+
+   This will free up bcache references and let you reuse the partition for
+   other purposes.
+
+
+
+Troubleshooting performance
+---------------------------
+
+Bcache has a bunch of config options and tunables. The defaults are intended to
+be reasonable for typical desktop and server workloads, but they're not what you
+want for getting the best possible numbers when benchmarking.
+
+ - Backing device alignment
+
+   The default metadata size in bcache is 8k.  If your backing device is
+   RAID based, then be sure to align this by a multiple of your stride
+   width using `make-bcache --data-offset`. If you intend to expand your
+   disk array in the future, then multiply a series of primes by your
+   raid stripe size to get the disk multiples that you would like.
+
+   For example:  If you have a 64k stripe size, then the following offset
+   would provide alignment for many common RAID5 data spindle counts::
+
+       64k * 2*2*2*3*3*5*7 bytes = 161280k
+
+   That space is wasted, but for only 157.5MB you can grow your RAID 5
+   volume to the following data-spindle counts without re-aligning::
+
+       3,4,5,6,7,8,9,10,12,14,15,18,20,21 ...
+
+ - Bad write performance
+
+   If write performance is not what you expected, you probably wanted to be
+   running in writeback mode, which isn't the default (not due to a lack of
+   maturity, but simply because in writeback mode you'll lose data if something
+   happens to your SSD)::
+
+       # echo writeback > /sys/block/bcache0/bcache/cache_mode
+
+ - Bad performance, or traffic not going to the SSD that you'd expect
+
+   By default, bcache doesn't cache everything. It tries to skip sequential IO -
+   because you really want to be caching the random IO, and if you copy a 10
+   gigabyte file you probably don't want that pushing 10 gigabytes of randomly
+   accessed data out of your cache.
+
+   But if you want to benchmark reads from cache, and you start out with fio
+   writing an 8 gigabyte test file, then you want to disable that::
+
+       # echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
+
+   To set it back to the default (4 MB), do::
+
+       # echo 4M > /sys/block/bcache0/bcache/sequential_cutoff
+
+ - Traffic's still going to the spindle/still getting cache misses
+
+   In the real world, SSDs don't always keep up with disks - particularly with
+   slower SSDs, many disks being cached by one SSD, or mostly sequential IO. So
+   you want to avoid being bottlenecked by the SSD and having it slow everything
+   down.
+
+   To avoid that, bcache tracks latency to the cache device, and gradually
+   throttles traffic if the latency exceeds a threshold (it does this by
+   cranking down the sequential bypass).
+
+   You can disable this if you need to by setting the thresholds to 0::
+
+       # echo 0 > /sys/fs/bcache/<cache set>/congested_read_threshold_us
+       # echo 0 > /sys/fs/bcache/<cache set>/congested_write_threshold_us
+
+   The default is 2000 us (2 milliseconds) for reads, and 20000 for writes.
+
+ - Still getting cache misses, of the same data
+
+   One last issue that sometimes trips people up is actually an old bug, due to
+   the way cache coherency is handled for cache misses. If a btree node is full,
+   a cache miss won't be able to insert a key for the new data and the data
+   won't be written to the cache.
+
+   In practice this isn't an issue because as soon as a write comes along it'll
+   cause the btree node to be split, and it takes almost no write traffic for
+   this to stop being noticeable (especially since bcache's btree
+   nodes are huge and index large regions of the device). But when you're
+   benchmarking, if you're trying to warm the cache by reading a bunch of data
+   and there's no other traffic - that can be a problem.
+
+   Solution: warm the cache by doing writes, or use the testing branch (there's
+   a fix for the issue there).
+
+
+Sysfs - backing device
+----------------------
+
+Available at /sys/block/<bdev>/bcache, /sys/block/bcache*/bcache and
+(if attached) /sys/fs/bcache/<cset-uuid>/bdev*
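+
+For example, to check the current state and dirty data of a backing device
+(using the paths above)::
+
+  cat /sys/block/bcache0/bcache/state
+  cat /sys/block/bcache0/bcache/dirty_data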
+
+attach
+  Echo the UUID of a cache set to this file to enable caching.
+
+cache_mode
+  Can be one of either writethrough, writeback, writearound or none.
+
+clear_stats
+  Writing to this file resets the running total stats (not the day/hour/5 minute
+  decaying versions).
+
+detach
+  Write to this file to detach from a cache set. If there is dirty data in the
+  cache, it will be flushed first.
+
+dirty_data
+  Amount of dirty data for this backing device in the cache. Continuously
+  updated unlike the cache set's version, but may be slightly off.
+
+label
+  Name of underlying device.
+
+readahead
+  Size of readahead that should be performed.  Defaults to 0.  If set to e.g.
+  1M, it will round cache miss reads up to that size, but without overlapping
+  existing cache entries.
+
+running
+  1 if bcache is running (i.e. whether the /dev/bcache device exists, whether
+  it's in passthrough mode or caching).
+
+sequential_cutoff
+  A sequential IO will bypass the cache once it passes this threshold; the
+  most recent 128 IOs are tracked so sequential IO can be detected even when
+  it isn't all done at once.
+
+sequential_merge
+  If non zero, bcache keeps a list of the last 128 requests submitted to compare
+  against all new requests to determine which new requests are sequential
+  continuations of previous requests for the purpose of determining sequential
+  cutoff. This is necessary if the sequential cutoff value is greater than the
+  maximum acceptable sequential size for any single request.
+
+state
+  The backing device can be in one of four different states:
+
+  no cache: Has never been attached to a cache set.
+
+  clean: Part of a cache set, and there is no cached dirty data.
+
+  dirty: Part of a cache set, and there is cached dirty data.
+
+  inconsistent: The backing device was forcibly run by the user when there was
+  dirty data cached but the cache set was unavailable; whatever data was on the
+  backing device has likely been corrupted.
+
+stop
+  Write to this file to shut down the bcache device and close the backing
+  device.
+
+writeback_delay
+  When dirty data is written to the cache and it previously did not contain
+  any, bcache waits some number of seconds before initiating writeback.
+  Defaults to 30.
+
+writeback_percent
+  If nonzero, bcache tries to keep around this percentage of the cache dirty by
+  throttling background writeback and using a PD controller to smoothly adjust
+  the rate.
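+
+  For example, to aim for roughly 10% of the cache being kept dirty (a
+  sketch; the device name depends on your setup)::
+
+    # echo 10 > /sys/block/bcache0/bcache/writeback_percent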
+
+writeback_rate
+  Rate in sectors per second - if writeback_percent is nonzero, background
+  writeback is throttled to this rate. Continuously adjusted by bcache but may
+  also be set by the user.
+
+writeback_running
+  If off, writeback of dirty data will not take place at all. Dirty data will
+  still be added to the cache until it is mostly full; only meant for
+  benchmarking. Defaults to on.
+
+Sysfs - backing device stats
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are directories with these numbers for a running total, as well as
+versions that decay over the past day, hour and 5 minutes; they're also
+aggregated in the cache set directory.
+
+bypassed
+  Amount of IO (both reads and writes) that has bypassed the cache
+
+cache_hits, cache_misses, cache_hit_ratio
+  Hits and misses are counted per individual IO as bcache sees them; a
+  partial hit is counted as a miss.
+
+cache_bypass_hits, cache_bypass_misses
+  Hits and misses for IO that is intended to skip the cache are still counted,
+  but broken out here.
+
+cache_miss_collisions
+  Counts instances where data was going to be inserted into the cache from a
+  cache miss, but raced with a write and data was already present (usually 0
+  since the synchronization for cache misses was rewritten)
+
+cache_readaheads
+  Count of times readahead occurred.
+
+Sysfs - cache set
+~~~~~~~~~~~~~~~~~
+
+Available at /sys/fs/bcache/<cset-uuid>
+
+average_key_size
+  Average data per key in the btree.
+
+bdev<0..n>
+  Symlink to each of the attached backing devices.
+
+block_size
+  Block size of the cache devices.
+
+btree_cache_size
+  Amount of memory currently used by the btree cache
+
+bucket_size
+  Size of buckets
+
+cache<0..n>
+  Symlink to each of the cache devices comprising this cache set.
+
+cache_available_percent
+  Percentage of cache device which doesn't contain dirty data, and could
+  potentially be used for writeback.  This doesn't mean this space isn't used
+  for clean cached data; the unused statistic (in priority_stats) is typically
+  much lower.
+
+clear_stats
+  Clears the statistics associated with this cache
+
+dirty_data
+  Amount of dirty data in the cache (updated when garbage collection runs).
+
+flash_vol_create
+  Echoing a size to this file (in human readable units, k/M/G) creates a thinly
+  provisioned volume backed by the cache set.
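+
+  For example, a 10G thin volume could be created with something like
+  (the cache set UUID is a placeholder)::
+
+    # echo 10G > /sys/fs/bcache/<cset-uuid>/flash_vol_create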
+
+io_error_halflife, io_error_limit
+  These determine how many errors we accept before disabling the cache.
+  Each error is decayed by the half life (in # ios).  If the decaying count
+  reaches io_error_limit, dirty data is written out and the cache is disabled.
+
+journal_delay_ms
+  Journal writes will delay for up to this many milliseconds, unless a cache
+  flush happens sooner. Defaults to 100.
+
+root_usage_percent
+  Percentage of the root btree node in use.  If this gets too high the node
+  will split, increasing the tree depth.
+
+stop
+  Write to this file to shut down the cache set - waits until all attached
+  backing devices have been shut down.
+
+tree_depth
+  Depth of the btree (A single node btree has depth 0).
+
+unregister
+  Detaches all backing devices and closes the cache devices; if dirty data is
+  present it will disable writeback caching and wait for it to be flushed.
+
+Sysfs - cache set internal
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This directory also exposes timings for a number of internal operations, with
+separate files for average duration, average frequency, last occurrence and max
+duration: garbage collection, btree read, btree node sorts and btree splits.
+
+active_journal_entries
+  Number of journal entries that are newer than the index.
+
+btree_nodes
+  Total nodes in the btree.
+
+btree_used_percent
+  Average fraction of btree in use.
+
+bset_tree_stats
+  Statistics about the auxiliary search trees
+
+btree_cache_max_chain
+  Longest chain in the btree node cache's hash table
+
+cache_read_races
+  Counts instances where while data was being read from the cache, the bucket
+  was reused and invalidated - i.e. where the pointer was stale after the read
+  completed. When this occurs the data is reread from the backing device.
+
+trigger_gc
+  Writing to this file forces garbage collection to run.
+
+Sysfs - Cache device
+~~~~~~~~~~~~~~~~~~~~
+
+Available at /sys/block/<cdev>/bcache
+
+block_size
+  Minimum granularity of writes - should match hardware sector size.
+
+btree_written
+  Sum of all btree writes, in (kilo/mega/giga) bytes
+
+bucket_size
+  Size of buckets
+
+cache_replacement_policy
+  One of lru, fifo or random.
+
+discard
+  Boolean; if on, a discard/TRIM will be issued to each bucket before it is
+  reused.  Defaults to off, since SATA TRIM is an unqueued command (and thus
+  slow).
+
+freelist_percent
+  Size of the freelist as a percentage of nbuckets. Can be written to in
+  order to increase the number of buckets kept on the freelist, which lets
+  you artificially reduce the size of the cache at runtime. Mostly for
+  testing purposes (i.e. testing how different cache sizes affect your hit
+  rate), but since buckets are discarded when they move on to the freelist,
+  this will also make the SSD's garbage collection easier by effectively
+  giving it more reserved space.
+
+io_errors
+  Number of errors that have occurred, decayed by io_error_halflife.
+
+metadata_written
+  Sum of all non data writes (btree writes and all other metadata).
+
+nbuckets
+  Total buckets in this cache
+
+priority_stats
+  Statistics about how recently data in the cache has been accessed.
+  This can reveal your working set size.  Unused is the percentage of
+  the cache that doesn't contain any data.  Metadata is bcache's
+  metadata overhead.  Average is the average priority of cache buckets.
+  Next is a list of quantiles with the priority threshold of each.
+
+written
+  Sum of all data that has been written to the cache; comparison with
+  btree_written gives the amount of write inflation in bcache.
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
new file mode 100644 (file)
index 0000000..74cdeae
--- /dev/null
@@ -0,0 +1,1998 @@
+================
+Control Group v2
+================
+
+:Date: October, 2015
+:Author: Tejun Heo <tj@kernel.org>
+
+This is the authoritative documentation on the design, interface and
+conventions of cgroup v2.  It describes all userland-visible aspects
+of cgroup including core and specific controller behaviors.  All
+future changes must be reflected in this document.  Documentation for
+v1 is available under Documentation/cgroup-v1/.
+
+.. CONTENTS
+
+   1. Introduction
+     1-1. Terminology
+     1-2. What is cgroup?
+   2. Basic Operations
+     2-1. Mounting
+     2-2. Organizing Processes and Threads
+       2-2-1. Processes
+       2-2-2. Threads
+     2-3. [Un]populated Notification
+     2-4. Controlling Controllers
+       2-4-1. Enabling and Disabling
+       2-4-2. Top-down Constraint
+       2-4-3. No Internal Process Constraint
+     2-5. Delegation
+       2-5-1. Model of Delegation
+       2-5-2. Delegation Containment
+     2-6. Guidelines
+       2-6-1. Organize Once and Control
+       2-6-2. Avoid Name Collisions
+   3. Resource Distribution Models
+     3-1. Weights
+     3-2. Limits
+     3-3. Protections
+     3-4. Allocations
+   4. Interface Files
+     4-1. Format
+     4-2. Conventions
+     4-3. Core Interface Files
+   5. Controllers
+     5-1. CPU
+       5-1-1. CPU Interface Files
+     5-2. Memory
+       5-2-1. Memory Interface Files
+       5-2-2. Usage Guidelines
+       5-2-3. Memory Ownership
+     5-3. IO
+       5-3-1. IO Interface Files
+       5-3-2. Writeback
+     5-4. PID
+       5-4-1. PID Interface Files
+     5-5. Device
+     5-6. RDMA
+       5-6-1. RDMA Interface Files
+     5-7. Misc
+       5-7-1. perf_event
+     5-N. Non-normative information
+       5-N-1. CPU controller root cgroup process behaviour
+       5-N-2. IO controller root cgroup process behaviour
+   6. Namespace
+     6-1. Basics
+     6-2. The Root and Views
+     6-3. Migration and setns(2)
+     6-4. Interaction with Other Namespaces
+   P. Information on Kernel Programming
+     P-1. Filesystem Support for Writeback
+   D. Deprecated v1 Core Features
+   R. Issues with v1 and Rationales for v2
+     R-1. Multiple Hierarchies
+     R-2. Thread Granularity
+     R-3. Competition Between Inner Nodes and Threads
+     R-4. Other Interface Issues
+     R-5. Controller Issues and Remedies
+       R-5-1. Memory
+
+
+Introduction
+============
+
+Terminology
+-----------
+
+"cgroup" stands for "control group" and is never capitalized.  The
+singular form is used to designate the whole feature and also as a
+qualifier as in "cgroup controllers".  When explicitly referring to
+multiple individual control groups, the plural form "cgroups" is used.
+
+
+What is cgroup?
+---------------
+
+cgroup is a mechanism to organize processes hierarchically and
+distribute system resources along the hierarchy in a controlled and
+configurable manner.
+
+cgroup is largely composed of two parts - the core and controllers.
+cgroup core is primarily responsible for hierarchically organizing
+processes.  A cgroup controller is usually responsible for
+distributing a specific type of system resource along the hierarchy
+although there are utility controllers which serve purposes other than
+resource distribution.
+
+cgroups form a tree structure and every process in the system belongs
+to one and only one cgroup.  All threads of a process belong to the
+same cgroup.  On creation, all processes are put in the cgroup that
+the parent process belongs to at the time.  A process can be migrated
+to another cgroup.  Migration of a process doesn't affect already
+existing descendant processes.
+
+Following certain structural constraints, controllers may be enabled or
+disabled selectively on a cgroup.  All controller behaviors are
+hierarchical - if a controller is enabled on a cgroup, it affects all
+processes which belong to the cgroups comprising the inclusive
+sub-hierarchy of the cgroup.  When a controller is enabled on a nested
+cgroup, it always restricts the resource distribution further.  The
+restrictions set closer to the root in the hierarchy can not be
+overridden from further away.
+
+
+Basic Operations
+================
+
+Mounting
+--------
+
+Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
+hierarchy can be mounted with the following mount command::
+
+  # mount -t cgroup2 none $MOUNT_POINT
+
+cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
+controllers which support v2 and are not bound to a v1 hierarchy are
+automatically bound to the v2 hierarchy and show up at the root.
+Controllers which are not in active use in the v2 hierarchy can be
+bound to other hierarchies.  This allows mixing v2 hierarchy with the
+legacy v1 multiple hierarchies in a fully backward compatible way.
+
+A controller can be moved across hierarchies only after the controller
+is no longer referenced in its current hierarchy.  Because per-cgroup
+controller states are destroyed asynchronously and controllers may
+have lingering references, a controller may not show up immediately on
+the v2 hierarchy after the final umount of the previous hierarchy.
+Similarly, a controller should be fully disabled to be moved out of
+the unified hierarchy and it may take some time for the disabled
+controller to become available for other hierarchies; furthermore, due
+to inter-controller dependencies, other controllers may need to be
+disabled too.
+
+While useful for development and manual configurations, moving
+controllers dynamically between the v2 and other hierarchies is
+strongly discouraged for production use.  It is recommended to decide
+on the hierarchies and controller associations before starting to use
+the controllers after system boot.
+
+During transition to v2, system management software might still
+automount the v1 cgroup filesystem and so hijack all controllers
+during boot, before manual intervention is possible. To make testing
+and experimenting easier, the kernel parameter cgroup_no_v1= allows
+disabling controllers in v1 and making them always available in v2.
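+
+For example, adding the following to the kernel command line keeps all
+controllers out of the v1 hierarchies (a minimal sketch; individual
+controller names can be listed instead of "all")::
+
+  cgroup_no_v1=all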
+
+cgroup v2 currently supports the following mount options.
+
+  nsdelegate
+
+       Consider cgroup namespaces as delegation boundaries.  This
+       option is system wide and can only be set on mount or modified
+       through remount from the init namespace.  The mount option is
+       ignored on non-init namespace mounts.  Please refer to the
+       Delegation section for details.
+
+
+Organizing Processes and Threads
+--------------------------------
+
+Processes
+~~~~~~~~~
+
+Initially, only the root cgroup exists to which all processes belong.
+A child cgroup can be created by creating a sub-directory::
+
+  # mkdir $CGROUP_NAME
+
+A given cgroup may have multiple child cgroups forming a tree
+structure.  Each cgroup has a read-writable interface file
+"cgroup.procs".  When read, it lists the PIDs of all processes which
+belong to the cgroup one-per-line.  The PIDs are not ordered and the
+same PID may show up more than once if the process got moved to
+another cgroup and then back or the PID got recycled while reading.
+
+A process can be migrated into a cgroup by writing its PID to the
+target cgroup's "cgroup.procs" file.  Only one process can be migrated
+on a single write(2) call.  If a process is composed of multiple
+threads, writing the PID of any thread migrates all threads of the
+process.
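+
+For example, using the mount point from above, a process could be moved
+into a child cgroup like this (a minimal sketch; the cgroup name and the
+PID are hypothetical)::
+
+  # mkdir $MOUNT_POINT/test-cgroup
+  # echo $PID > $MOUNT_POINT/test-cgroup/cgroup.procs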
+
+When a process forks a child process, the new process is born into the
+cgroup that the forking process belongs to at the time of the
+operation.  After exit, a process stays associated with the cgroup
+that it belonged to at the time of exit until it's reaped; however, a
+zombie process does not appear in "cgroup.procs" and thus can't be
+moved to another cgroup.
+
+A cgroup which doesn't have any children or live processes can be
+destroyed by removing the directory.  Note that a cgroup which doesn't
+have any children and is associated only with zombie processes is
+considered empty and can be removed::
+
+  # rmdir $CGROUP_NAME
+
+"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
+cgroup is in use in the system, this file may contain multiple lines,
+one for each hierarchy.  The entry for cgroup v2 is always in the
+format "0::$PATH"::
+
+  # cat /proc/842/cgroup
+  ...
+  0::/test-cgroup/test-cgroup-nested
+
+If the process becomes a zombie and the cgroup it was associated with
+is removed subsequently, " (deleted)" is appended to the path::
+
+  # cat /proc/842/cgroup
+  ...
+  0::/test-cgroup/test-cgroup-nested (deleted)
+
+
+Threads
+~~~~~~~
+
+cgroup v2 supports thread granularity for a subset of controllers to
+support use cases requiring hierarchical resource distribution across
+the threads of a group of processes.  By default, all threads of a
+process belong to the same cgroup, which also serves as the resource
+domain to host resource consumptions which are not specific to a
+process or thread.  The thread mode allows threads to be spread across
+a subtree while still maintaining the common resource domain for them.
+
+Controllers which support thread mode are called threaded controllers.
+The ones which don't are called domain controllers.
+
+Marking a cgroup threaded makes it join the resource domain of its
+parent as a threaded cgroup.  The parent may be another threaded
+cgroup whose resource domain is further up in the hierarchy.  The root
+of a threaded subtree, that is, the nearest ancestor which is not
+threaded, is called threaded domain or thread root interchangeably and
+serves as the resource domain for the entire subtree.
+
+Inside a threaded subtree, threads of a process can be put in
+different cgroups and are not subject to the no internal process
+constraint - threaded controllers can be enabled on non-leaf cgroups
+whether they have threads in them or not.
+
+As the threaded domain cgroup hosts all the domain resource
+consumptions of the subtree, it is considered to have internal
+resource consumptions whether there are processes in it or not and
+can't have populated child cgroups which aren't threaded.  Because the
+root cgroup is not subject to the no internal process constraint, it
+can serve both as a threaded domain and a parent to domain cgroups.
+
+The current operation mode or type of the cgroup is shown in the
+"cgroup.type" file which indicates whether the cgroup is a normal
+domain, a domain which is serving as the domain of a threaded subtree,
+or a threaded cgroup.
+
+On creation, a cgroup is always a domain cgroup and can be made
+threaded by writing "threaded" to the "cgroup.type" file.  The
+operation is one-way::
+
+  # echo threaded > cgroup.type
+
+Once threaded, the cgroup can't be made a domain again.  To enable the
+thread mode, the following conditions must be met.
+
+- As the cgroup will join the parent's resource domain, the parent
+  must either be a valid (threaded) domain or a threaded cgroup.
+
+- When the parent is an unthreaded domain, it must not have any domain
+  controllers enabled or populated domain children.  The root is
+  exempt from this requirement.
+
+Topology-wise, a cgroup can be in an invalid state.  Please consider
+the following topology::
+
+  A (threaded domain) - B (threaded) - C (domain, just created)
+
+C is created as a domain but isn't connected to a parent which can
+host child domains.  C can't be used until it is turned into a
+threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
+these cases.  Operations which fail due to invalid topology use
+EOPNOTSUPP as the errno.
+
+A domain cgroup is turned into a threaded domain when one of its child
+cgroups becomes threaded or threaded controllers are enabled in the
+"cgroup.subtree_control" file while there are processes in the cgroup.
+A threaded domain reverts to a normal domain when the conditions
+clear.
+
+When read, "cgroup.threads" contains the list of the thread IDs of all
+threads in the cgroup.  Except that the operations are per-thread
+instead of per-process, "cgroup.threads" has the same format and
+behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
+written to in any cgroup, as it can only move threads inside the same
+threaded domain, its operations are confined inside each threaded
+subtree.
+
+The threaded domain cgroup serves as the resource domain for the whole
+subtree, and, while the threads can be scattered across the subtree,
+all the processes are considered to be in the threaded domain cgroup.
+"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
+processes in the subtree and is not readable in the subtree proper.
+However, "cgroup.procs" can be written to from anywhere in the subtree
+to migrate all threads of the matching process to the cgroup.
+
+Only threaded controllers can be enabled in a threaded subtree.  When
+a threaded controller is enabled inside a threaded subtree, it only
+accounts for and controls resource consumptions associated with the
+threads in the cgroup and its descendants.  All consumptions which
+aren't tied to a specific thread belong to the threaded domain cgroup.
+
+Because a threaded subtree is exempt from the no internal process
+constraint, a threaded controller must be able to handle competition
+between threads in a non-leaf cgroup and its child cgroups.  Each
+threaded controller defines how such competitions are handled.
+
+
+[Un]populated Notification
+--------------------------
+
+Each non-root cgroup has a "cgroup.events" file which contains
+"populated" field indicating whether the cgroup's sub-hierarchy has
+live processes in it.  Its value is 0 if there is no live process in
+the cgroup and its descendants; otherwise, 1.  poll and [id]notify
+events are triggered when the value changes.  This can be used, for
+example, to start a clean-up operation after all processes of a given
+sub-hierarchy have exited.  The populated state updates and
+notifications are recursive.  Consider the following sub-hierarchy
+where the numbers in the parentheses represent the numbers of processes
+in each cgroup::
+
+  A(4) - B(0) - C(1)
+              \ D(0)
+
+A, B and C's "populated" fields would be 1 while D's 0.  After the one
+process in C exits, B and C's "populated" fields would flip to "0" and
+file modified events will be generated on the "cgroup.events" files of
+both cgroups.
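+
+As a sketch, the state can be observed by reading the file directly (the
+path is hypothetical)::
+
+  # cat $MOUNT_POINT/A/B/cgroup.events
+  populated 0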
+
+
+Controlling Controllers
+-----------------------
+
+Enabling and Disabling
+~~~~~~~~~~~~~~~~~~~~~~
+
+Each cgroup has a "cgroup.controllers" file which lists all
+controllers available for the cgroup to enable::
+
+  # cat cgroup.controllers
+  cpu io memory
+
+No controller is enabled by default.  Controllers can be enabled and
+disabled by writing to the "cgroup.subtree_control" file::
+
+  # echo "+cpu +memory -io" > cgroup.subtree_control
+
+Only controllers which are listed in "cgroup.controllers" can be
+enabled.  When multiple operations are specified as above, either they
+all succeed or they all fail.  If multiple operations on the same
+controller are specified, the last one is effective.
+
+Enabling a controller in a cgroup indicates that the distribution of
+the target resource across its immediate children will be controlled.
+Consider the following sub-hierarchy.  The enabled controllers are
+listed in parentheses::
+
+  A(cpu,memory) - B(memory) - C()
+                            \ D()
+
+As A has "cpu" and "memory" enabled, A will control the distribution
+of CPU cycles and memory to its children, in this case, B.  As B has
+"memory" enabled but not "CPU", C and D will compete freely on CPU
+cycles but their division of memory available to B will be controlled.
+
+As a controller regulates the distribution of the target resource to
+the cgroup's children, enabling it creates the controller's interface
+files in the child cgroups.  In the above example, enabling "cpu" on B
+would create the "cpu." prefixed controller interface files in C and
+D.  Likewise, disabling "memory" from B would remove the "memory."
+prefixed controller interface files from C and D.  This means that the
+controller interface files - anything which doesn't start with
+"cgroup." are owned by the parent rather than the cgroup itself.
+
+
+Top-down Constraint
+~~~~~~~~~~~~~~~~~~~
+
+Resources are distributed top-down and a cgroup can further distribute
+a resource only if the resource has been distributed to it from the
+parent.  This means that all non-root "cgroup.subtree_control" files
+can only contain controllers which are enabled in the parent's
+"cgroup.subtree_control" file.  A controller can be enabled only if
+the parent has the controller enabled and a controller can't be
+disabled if one or more children have it enabled.
+
+
+No Internal Process Constraint
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Non-root cgroups can distribute domain resources to their children
+only when they don't have any processes of their own.  In other words,
+only domain cgroups which don't contain any processes can have domain
+controllers enabled in their "cgroup.subtree_control" files.
+
+This guarantees that, when a domain controller is looking at the part
+of the hierarchy which has it enabled, processes are always only on
+the leaves.  This rules out situations where child cgroups compete
+against internal processes of the parent.
+
+The root cgroup is exempt from this restriction.  Root contains
+processes and anonymous resource consumption which can't be associated
+with any other cgroups and requires special treatment from most
+controllers.  How resource consumption in the root cgroup is governed
+is up to each controller (for more information on this topic please
+refer to the Non-normative information section in the Controllers
+chapter).
+
+Note that the restriction doesn't get in the way if there is no
+enabled controller in the cgroup's "cgroup.subtree_control".  This is
+important as otherwise it wouldn't be possible to create children of a
+populated cgroup.  To control resource distribution of a cgroup, the
+cgroup must create children and transfer all its processes to the
+children before enabling controllers in its "cgroup.subtree_control"
+file.
+
+
+Delegation
+----------
+
+Model of Delegation
+~~~~~~~~~~~~~~~~~~~
+
+A cgroup can be delegated in two ways.  First, to a less privileged
+user by granting write access of the directory and its "cgroup.procs",
+"cgroup.threads" and "cgroup.subtree_control" files to the user.
+Second, if the "nsdelegate" mount option is set, automatically to a
+cgroup namespace on namespace creation.
+
+Because the resource control interface files in a given directory
+control the distribution of the parent's resources, the delegatee
+shouldn't be allowed to write to them.  For the first method, this is
+achieved by not granting access to these files.  For the second, the
+kernel rejects writes to all files other than "cgroup.procs" and
+"cgroup.subtree_control" on a namespace root from inside the
+namespace.
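+
+For the first method, the setup might look like the following sketch
+(the mount point, cgroup name and user are hypothetical); the resource
+control interface files such as "memory.max" are left owned by root::
+
+  # mkdir $MOUNT_POINT/delegated
+  # chown U0 $MOUNT_POINT/delegated
+  # chown U0 $MOUNT_POINT/delegated/cgroup.procs
+  # chown U0 $MOUNT_POINT/delegated/cgroup.threads
+  # chown U0 $MOUNT_POINT/delegated/cgroup.subtree_control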
+
+The end results are equivalent for both delegation types.  Once
+delegated, the user can build sub-hierarchy under the directory,
+organize processes inside it as it sees fit and further distribute the
+resources it received from the parent.  The limits and other settings
+of all resource controllers are hierarchical and regardless of what
+happens in the delegated sub-hierarchy, nothing can escape the
+resource restrictions imposed by the parent.
+
+Currently, cgroup doesn't impose any restrictions on the number of
+cgroups in or nesting depth of a delegated sub-hierarchy; however,
+this may be limited explicitly in the future.
+
+
+Delegation Containment
+~~~~~~~~~~~~~~~~~~~~~~
+
+A delegated sub-hierarchy is contained in the sense that processes
+can't be moved into or out of the sub-hierarchy by the delegatee.
+
+For delegations to a less privileged user, this is achieved by
+requiring the following conditions for a process with a non-root euid
+to migrate a target process into a cgroup by writing its PID to the
+"cgroup.procs" file.
+
+- The writer must have write access to the "cgroup.procs" file.
+
+- The writer must have write access to the "cgroup.procs" file of the
+  common ancestor of the source and destination cgroups.
+
+The above two constraints ensure that while a delegatee may migrate
+processes around freely in the delegated sub-hierarchy it can't pull
+in from or push out to outside the sub-hierarchy.
+
+For an example, let's assume cgroups C0 and C1 have been delegated to
+user U0 who created C00, C01 under C0 and C10 under C1 as follows and
+all processes under C0 and C1 belong to U0::
+
+  ~~~~~~~~~~~~~ - C0 - C00
+  ~ cgroup    ~      \ C01
+  ~ hierarchy ~
+  ~~~~~~~~~~~~~ - C1 - C10
+
+Let's also say U0 wants to write the PID of a process which is
+currently in C10 into "C00/cgroup.procs".  U0 has write access to the
+file; however, the common ancestor of the source cgroup C10 and the
+destination cgroup C00 is above the points of delegation, so U0 does
+not have write access to its "cgroup.procs" file and the write will
+be denied with -EACCES.
+
+For delegations to namespaces, containment is achieved by requiring
+that both the source and destination cgroups are reachable from the
+namespace of the process which is attempting the migration.  If either
+is not reachable, the migration is rejected with -ENOENT.
+
+
+Guidelines
+----------
+
+Organize Once and Control
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Migrating a process across cgroups is a relatively expensive operation
+and stateful resources such as memory are not moved together with the
+process.  This is an explicit design decision as there often exist
+inherent trade-offs between migration and various hot paths in terms
+of synchronization cost.
+
+As such, migrating processes across cgroups frequently as a means to
+apply different resource restrictions is discouraged.  A workload
+should be assigned to a cgroup according to the system's logical and
+resource structure once on start-up.  Dynamic adjustments to resource
+distribution can be made by changing controller configuration through
+the interface files.
+
+
+Avoid Name Collisions
+~~~~~~~~~~~~~~~~~~~~~
+
+Interface files for a cgroup and its child cgroups occupy the same
+directory and it is possible to create child cgroups which collide
+with interface files.
+
+All cgroup core interface files are prefixed with "cgroup." and each
+controller's interface files are prefixed with the controller name and
+a dot.  A controller's name is composed of lower case alphabets and
+'_'s but never begins with an '_' so it can be used as the prefix
+character for collision avoidance.  Also, interface file names won't
+start or end with terms which are often used in categorizing workloads
+such as job, service, slice, unit or workload.
+
+cgroup doesn't do anything to prevent name collisions and it's the
+user's responsibility to avoid them.
+
+
+Resource Distribution Models
+============================
+
+cgroup controllers implement several resource distribution schemes
+depending on the resource type and expected use cases.  This section
+describes major schemes in use along with their expected behaviors.
+
+
+Weights
+-------
+
+A parent's resource is distributed by adding up the weights of all
+active children and giving each the fraction matching the ratio of its
+weight against the sum.  As only children which can make use of the
+resource at the moment participate in the distribution, this is
+work-conserving.  Due to the dynamic nature, this model is usually
+used for stateless resources.
+
+All weights are in the range [1, 10000] with the default at 100.  This
+allows symmetric multiplicative biases in both directions at fine
+enough granularity while staying in the intuitive range.
+
+As long as the weight is in range, all configuration combinations are
+valid and there is no reason to reject configuration changes or
+process migrations.
+
+"cpu.weight" proportionally distributes CPU cycles to active children
+and is an example of this type.
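+
+As a sketch, if a parent enables "cpu" and two active children are given
+weights of 100 and 200, they receive roughly 1/3 and 2/3 of the parent's
+CPU cycles respectively (the cgroup names are hypothetical and assumed
+to already exist)::
+
+  # echo "+cpu" > $MOUNT_POINT/parent/cgroup.subtree_control
+  # echo 100 > $MOUNT_POINT/parent/child-a/cpu.weight
+  # echo 200 > $MOUNT_POINT/parent/child-b/cpu.weight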
+
+
+Limits
+------
+
+A child can only consume up to the configured amount of the resource.
+Limits can be over-committed - the sum of the limits of children can
+exceed the amount of resource available to the parent.
+
+Limits are in the range [0, max] and default to "max", which is noop.
+
+As limits can be over-committed, all configuration combinations are
+valid and there is no reason to reject configuration changes or
+process migrations.
+
+"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
+on an IO device and is an example of this type.
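+
+For example, a read bandwidth limit and a write IOPS limit might be set
+on the device with major:minor numbers 8:16 like this (a sketch; the key
+names are described in the IO Interface Files section)::
+
+  # echo "8:16 rbps=2097152 wiops=120" > io.max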
+
+
+Protections
+-----------
+
+A cgroup is protected to be allocated up to the configured amount of
+the resource if the usages of all its ancestors are under their
+protected levels.  Protections can be hard guarantees or best effort
+soft boundaries.  Protections can also be over-committed, in which case
+only up to the amount available to the parent is protected among
+children.
+
+Protections are in the range [0, max] and default to 0, which is
+noop.
+
+As protections can be over-committed, all configuration combinations
+are valid and there is no reason to reject configuration changes or
+process migrations.
+
+"memory.low" implements best-effort memory protection and is an
+example of this type.
+
+
+Allocations
+-----------
+
+A cgroup is exclusively allocated a certain amount of a finite
+resource.  Allocations can't be over-committed - the sum of the
+allocations of children can not exceed the amount of resource
+available to the parent.
+
+Allocations are in the range [0, max] and default to 0, which is no
+resource.
+
+As allocations can't be over-committed, some configuration
+combinations are invalid and should be rejected.  Also, if the
+resource is mandatory for execution of processes, process migrations
+may be rejected.
+
+"cpu.rt.max" hard-allocates realtime slices and is an example of this
+type.
+
+
+Interface Files
+===============
+
+Format
+------
+
+All interface files should be in one of the following formats whenever
+possible::
+
+  New-line separated values
+  (when only one value can be written at once)
+
+       VAL0\n
+       VAL1\n
+       ...
+
+  Space separated values
+  (when read-only or multiple values can be written at once)
+
+       VAL0 VAL1 ...\n
+
+  Flat keyed
+
+       KEY0 VAL0\n
+       KEY1 VAL1\n
+       ...
+
+  Nested keyed
+
+       KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
+       KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
+       ...
+
+For a writable file, the format for writing should generally match
+reading; however, controllers may allow omitting later fields or
+implement restricted shortcuts for most common use cases.
+
+For both flat and nested keyed files, only the values for a single key
+can be written at a time.  For nested keyed files, the sub key pairs
+may be specified in any order and not all pairs have to be specified.
+
+
+Conventions
+-----------
+
+- Settings for a single feature should be contained in a single file.
+
+- The root cgroup should be exempt from resource control and thus
+  shouldn't have resource control interface files.  Also,
+  informational files on the root cgroup which end up showing global
+  information available elsewhere shouldn't exist.
+
+- If a controller implements weight based resource distribution, its
+  interface file should be named "weight" and have the range [1,
+  10000] with 100 as the default.  The values are chosen to allow
+  enough and symmetric bias in both directions while keeping it
+  intuitive (the default is 100%).
+
+- If a controller implements an absolute resource guarantee and/or
+  limit, the interface files should be named "min" and "max"
+  respectively.  If a controller implements best effort resource
+  guarantee and/or limit, the interface files should be named "low"
+  and "high" respectively.
+
+  In the above four control files, the special token "max" should be
+  used to represent upward infinity for both reading and writing.
+
+- If a setting has a configurable default value and keyed specific
+  overrides, the default entry should be keyed with "default" and
+  appear as the first entry in the file.
+
+  The default value can be updated by writing either "default $VAL" or
+  "$VAL".
+
+  When writing to update a specific override, "default" can be used as
+  the value to indicate removal of the override.  Override entries
+  with "default" as the value must not appear when read.
+
+  For example, a setting which is keyed by major:minor device numbers
+  with integer values may look like the following::
+
+    # cat cgroup-example-interface-file
+    default 150
+    8:0 300
+
+  The default value can be updated by::
+
+    # echo 125 > cgroup-example-interface-file
+
+  or::
+
+    # echo "default 125" > cgroup-example-interface-file
+
+  An override can be set by::
+
+    # echo "8:16 170" > cgroup-example-interface-file
+
+  and cleared by::
+
+    # echo "8:0 default" > cgroup-example-interface-file
+    # cat cgroup-example-interface-file
+    default 125
+    8:16 170
+
+- For events which are not very high frequency, an interface file
+  "events" should be created which lists event key value pairs.
+  Whenever a notifiable event happens, a file modified event should be
+  generated on the file.
+
+
+Core Interface Files
+--------------------
+
+All cgroup core files are prefixed with "cgroup."
+
+  cgroup.type
+
+       A read-write single value file which exists on non-root
+       cgroups.
+
+       When read, it indicates the current type of the cgroup, which
+       can be one of the following values.
+
+       - "domain" : A normal valid domain cgroup.
+
+       - "domain threaded" : A threaded domain cgroup which is
+          serving as the root of a threaded subtree.
+
+       - "domain invalid" : A cgroup which is in an invalid state.
+         It can't be populated or have controllers enabled.  It may
+         be allowed to become a threaded cgroup.
+
+       - "threaded" : A threaded cgroup which is a member of a
+          threaded subtree.
+
+       A cgroup can be turned into a threaded cgroup by writing
+       "threaded" to this file.
+
+  cgroup.procs
+       A read-write new-line separated values file which exists on
+       all cgroups.
+
+       When read, it lists the PIDs of all processes which belong to
+       the cgroup one-per-line.  The PIDs are not ordered and the
+       same PID may show up more than once if the process got moved
+       to another cgroup and then back or the PID got recycled while
+       reading.
+
+       A PID can be written to migrate the process associated with
+       the PID to the cgroup.  The writer should match all of the
+       following conditions.
+
+       - It must have write access to the "cgroup.procs" file.
+
+       - It must have write access to the "cgroup.procs" file of the
+         common ancestor of the source and destination cgroups.
+
+       When delegating a sub-hierarchy, write access to this file
+       should be granted along with the containing directory.
+
+       In a threaded cgroup, reading this file fails with EOPNOTSUPP
+       as all the processes belong to the thread root.  Writing is
+       supported and moves every thread of the process to the cgroup.
+
+  cgroup.threads
+       A read-write new-line separated values file which exists on
+       all cgroups.
+
+       When read, it lists the TIDs of all threads which belong to
+       the cgroup one-per-line.  The TIDs are not ordered and the
+       same TID may show up more than once if the thread got moved to
+       another cgroup and then back or the TID got recycled while
+       reading.
+
+       A TID can be written to migrate the thread associated with the
+       TID to the cgroup.  The writer should match all of the
+       following conditions.
+
+       - It must have write access to the "cgroup.threads" file.
+
+       - The cgroup that the thread is currently in must be in the
+          same resource domain as the destination cgroup.
+
+       - It must have write access to the "cgroup.procs" file of the
+         common ancestor of the source and destination cgroups.
+
+       When delegating a sub-hierarchy, write access to this file
+       should be granted along with the containing directory.
+
+  cgroup.controllers
+       A read-only space separated values file which exists on all
+       cgroups.
+
+       It shows a space separated list of all controllers available to
+       the cgroup.  The controllers are not ordered.
+
+  cgroup.subtree_control
+       A read-write space separated values file which exists on all
+       cgroups.  Starts out empty.
+
+       When read, it shows a space separated list of the controllers
+       which are enabled to control resource distribution from the
+       cgroup to its children.
+
+       Space separated list of controllers prefixed with '+' or '-'
+       can be written to enable or disable controllers.  A controller
+       name prefixed with '+' enables the controller and '-'
+       disables.  If a controller appears more than once on the list,
+       the last one is effective.  When multiple enable and disable
+       operations are specified, either all succeed or all fail.
+
+  cgroup.events
+       A read-only flat-keyed file which exists on non-root cgroups.
+       The following entries are defined.  Unless specified
+       otherwise, a value change in this file generates a file
+       modified event.
+
+         populated
+               1 if the cgroup or its descendants contains any live
+               processes; otherwise, 0.
+
+  cgroup.max.descendants
+       A read-write single value file.  The default is "max".
+
+       Maximum allowed number of descendant cgroups.
+       If the actual number of descendants is equal to or larger,
+       an attempt to create a new cgroup in the hierarchy will fail.
+
+  cgroup.max.depth
+       A read-write single value file.  The default is "max".
+
+       Maximum allowed descent depth below the current cgroup.
+       If the actual descent depth is equal to or larger,
+       an attempt to create a new child cgroup will fail.
+
+  cgroup.stat
+       A read-only flat-keyed file with the following entries:
+
+         nr_descendants
+               Total number of visible descendant cgroups.
+
+         nr_dying_descendants
+               Total number of dying descendant cgroups. A cgroup becomes
+               dying after being deleted by a user. The cgroup will remain
+               in the dying state for some undefined time (which can depend
+               on system load) before being completely destroyed.
+
+               A process can't enter a dying cgroup under any circumstances,
+               and a dying cgroup can't be revived.
+
+               A dying cgroup can consume system resources not exceeding the
+               limits which were active at the moment of cgroup deletion.
+
+
+Controllers
+===========
+
+CPU
+---
+
+The "cpu" controllers regulates distribution of CPU cycles.  This
+controller implements weight and absolute bandwidth limit models for
+normal scheduling policy and absolute bandwidth allocation model for
+realtime scheduling policy.
+
+WARNING: cgroup2 doesn't yet support control of realtime processes and
+the cpu controller can only be enabled when all RT processes are in
+the root cgroup.  Be aware that system management software may already
+have placed RT processes into nonroot cgroups during the system boot
+process, and these processes may need to be moved to the root cgroup
+before the cpu controller can be enabled.
+
+
+CPU Interface Files
+~~~~~~~~~~~~~~~~~~~
+
+All time durations are in microseconds.
+
+  cpu.stat
+       A read-only flat-keyed file which exists on non-root cgroups.
+       This file exists whether the controller is enabled or not.
+
+       It always reports the following three stats:
+
+       - usage_usec
+       - user_usec
+       - system_usec
+
+       and the following three when the controller is enabled:
+
+       - nr_periods
+       - nr_throttled
+       - throttled_usec
+
+  cpu.weight
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "100".
+
+       The weight in the range [1, 10000].
+
+  cpu.weight.nice
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "0".
+
+       The nice value is in the range [-20, 19].
+
+       This interface file is an alternative interface for
+       "cpu.weight" and allows reading and setting weight using the
+       same values used by nice(2).  Because the range is smaller and
+       granularity is coarser for the nice values, the read value is
+       the closest approximation of the current weight.
+
+  cpu.max
+       A read-write two value file which exists on non-root cgroups.
+       The default is "max 100000".
+
+       The maximum bandwidth limit.  It's in the following format::
+
+         $MAX $PERIOD
+
+       which indicates that the group may consume up to $MAX in each
+       $PERIOD duration.  "max" for $MAX indicates no limit.  If only
+       one number is written, $MAX is updated.
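+
+       For example, to allow the cgroup half of one CPU's worth of
+       bandwidth with the default period, one might write (a sketch)::
+
+         # echo "50000 100000" > cpu.max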
+
+
+Memory
+------
+
+The "memory" controller regulates distribution of memory.  Memory is
+stateful and implements both limit and protection models.  Due to the
+intertwining between memory usage and reclaim pressure and the
+stateful nature of memory, the distribution model is relatively
+complex.
+
+While not completely water-tight, all major memory usages by a given
+cgroup are tracked so that the total memory consumption can be
+accounted and controlled to a reasonable extent.  Currently, the
+following types of memory usages are tracked.
+
+- Userland memory - page cache and anonymous memory.
+
+- Kernel data structures such as dentries and inodes.
+
+- TCP socket buffers.
+
+The above list may expand in the future for better coverage.
+
+
+Memory Interface Files
+~~~~~~~~~~~~~~~~~~~~~~
+
+All memory amounts are in bytes.  If a value which is not aligned to
+PAGE_SIZE is written, the value may be rounded up to the closest
+PAGE_SIZE multiple when read back.
+
+  memory.current
+       A read-only single value file which exists on non-root
+       cgroups.
+
+       The total amount of memory currently being used by the cgroup
+       and its descendants.
+
+  memory.low
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "0".
+
+       Best-effort memory protection.  If the memory usages of a
+       cgroup and all its ancestors are below their low boundaries,
+       the cgroup's memory won't be reclaimed unless memory can be
+       reclaimed from unprotected cgroups.
+
+       Putting more memory than generally available under this
+       protection is discouraged.
+
+  memory.high
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "max".
+
+       Memory usage throttle limit.  This is the main mechanism to
+       control memory usage of a cgroup.  If a cgroup's usage goes
+       over the high boundary, the processes of the cgroup are
+       throttled and put under heavy reclaim pressure.
+
+       Going over the high limit never invokes the OOM killer and
+       under extreme conditions the limit may be breached.
+
+  memory.max
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "max".
+
+       Memory usage hard limit.  This is the final protection
+       mechanism.  If a cgroup's memory usage reaches this limit and
+       can't be reduced, the OOM killer is invoked in the cgroup.
+       Under certain circumstances, the usage may go over the limit
+       temporarily.
+
+       This is the ultimate protection mechanism.  As long as the
+       high limit is used and monitored properly, this limit's
+       utility is limited to providing the final safety net.
+
+  memory.events
+       A read-only flat-keyed file which exists on non-root cgroups.
+       The following entries are defined.  Unless specified
+       otherwise, a value change in this file generates a file
+       modified event.
+
+         low
+               The number of times the cgroup is reclaimed due to
+               high memory pressure even though its usage is under
+               the low boundary.  This usually indicates that the low
+               boundary is over-committed.
+
+         high
+               The number of times processes of the cgroup are
+               throttled and routed to perform direct memory reclaim
+               because the high memory boundary was exceeded.  For a
+               cgroup whose memory usage is capped by the high limit
+               rather than global memory pressure, this event's
+               occurrences are expected.
+
+         max
+               The number of times the cgroup's memory usage was
+               about to go over the max boundary.  If direct reclaim
+               fails to bring it down, the cgroup goes to OOM state.
+
+         oom
+               The number of times the cgroup's memory usage reached
+               the limit and an allocation was about to fail.
+
+               Depending on context, the result could be invocation of
+               the OOM killer and retrying the allocation, or failing
+               the allocation.
+
+               A failed allocation could in turn be returned to
+               userspace as -ENOMEM or silently ignored in cases like
+               disk readahead.  For now, OOM in a memory cgroup kills
+               tasks if and only if the shortage happened inside a
+               page fault.
+
+         oom_kill
+               The number of processes belonging to this cgroup
+               killed by any kind of OOM killer.
+
+  memory.stat
+       A read-only flat-keyed file which exists on non-root cgroups.
+
+       This breaks down the cgroup's memory footprint into different
+       types of memory, type-specific details, and other information
+       on the state and past events of the memory management system.
+
+       All memory amounts are in bytes.
+
+       The entries are ordered to be human readable, and new entries
+       can show up in the middle. Don't rely on items remaining in a
+       fixed position; use the keys to look up specific values!
+
+         anon
+               Amount of memory used in anonymous mappings such as
+               brk(), sbrk(), and mmap(MAP_ANONYMOUS)
+
+         file
+               Amount of memory used to cache filesystem data,
+               including tmpfs and shared memory.
+
+         kernel_stack
+               Amount of memory allocated to kernel stacks.
+
+         slab
+               Amount of memory used for storing in-kernel data
+               structures.
+
+         sock
+               Amount of memory used in network transmission buffers
+
+         shmem
+               Amount of cached filesystem data that is swap-backed,
+               such as tmpfs, shm segments, shared anonymous mmap()s
+
+         file_mapped
+               Amount of cached filesystem data mapped with mmap()
+
+         file_dirty
+               Amount of cached filesystem data that was modified but
+               not yet written back to disk
+
+         file_writeback
+               Amount of cached filesystem data that was modified and
+               is currently being written back to disk
+
+         inactive_anon, active_anon, inactive_file, active_file, unevictable
+               Amount of memory, swap-backed and filesystem-backed,
+               on the internal memory management lists used by the
+               page reclaim algorithm
+
+         slab_reclaimable
+               Part of "slab" that might be reclaimed, such as
+               dentries and inodes.
+
+         slab_unreclaimable
+               Part of "slab" that cannot be reclaimed on memory
+               pressure.
+
+         pgfault
+               Total number of page faults incurred
+
+         pgmajfault
+               Number of major page faults incurred
+
+         workingset_refault
+               Number of refaults of previously evicted pages
+
+         workingset_activate
+               Number of refaulted pages that were immediately activated
+
+         workingset_nodereclaim
+               Number of times a shadow node has been reclaimed
+
+         pgrefill
+               Amount of scanned pages (in an active LRU list)
+
+         pgscan
+               Amount of scanned pages (in an inactive LRU list)
+
+         pgsteal
+               Amount of reclaimed pages
+
+         pgactivate
+               Amount of pages moved to the active LRU list
+
+         pgdeactivate
+               Amount of pages moved to the inactive LRU list
+
+         pglazyfree
+               Amount of pages postponed to be freed under memory pressure
+
+         pglazyfreed
+               Amount of reclaimed lazyfree pages
+
+  memory.swap.current
+       A read-only single value file which exists on non-root
+       cgroups.
+
+       The total amount of swap currently being used by the cgroup
+       and its descendants.
+
+  memory.swap.max
+       A read-write single value file which exists on non-root
+       cgroups.  The default is "max".
+
+       Swap usage hard limit.  If a cgroup's swap usage reaches this
+       limit, anonymous memory of the cgroup will not be swapped out.
+
+
+Usage Guidelines
+~~~~~~~~~~~~~~~~
+
+"memory.high" is the main mechanism to control memory usage.
+Over-committing on the high limit (sum of high limits > available
+memory) and letting global memory pressure distribute memory according
+to usage is a viable strategy.
+
+Because breach of the high limit doesn't trigger the OOM killer but
+throttles the offending cgroup, a management agent has ample
+opportunities to monitor and take appropriate actions such as granting
+more memory or terminating the workload.
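+
+As a sketch, a management agent might set the throttle limit and then
+watch the "high" counter in "memory.events" to decide when to intervene
+(the limit and the counter values shown are hypothetical)::
+
+  # echo 4G > memory.high
+  # cat memory.events
+  low 0
+  high 13
+  max 0
+  oom 0
+  oom_kill 0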
+
+Determining whether a cgroup has enough memory is not trivial as
+memory usage doesn't indicate whether the workload can benefit from
+more memory.  For example, a workload which writes data received from
+the network to a file can use all available memory but can also operate
+just as performantly with a small amount of memory.  A measure of memory
+pressure - how much the workload is being impacted due to lack of
+memory - is necessary to determine whether a workload needs more
+memory; unfortunately, a memory pressure monitoring mechanism isn't
+implemented yet.
+
+
+Memory Ownership
+~~~~~~~~~~~~~~~~
+
+A memory area is charged to the cgroup which instantiated it and stays
+charged to the cgroup until the area is released.  Migrating a process
+to a different cgroup doesn't move the memory usages that it
+insta