Using an In-Memory File System
RAM is fast, so we can carve out a region of memory, mount it at a chosen mount point, and use it as a fast cache.
tmpfs is a virtual-memory file system: its contents live entirely in VM (virtual memory), which is managed by the Linux kernel's VM subsystem. Virtually all modern operating systems use MMU-based virtual memory management.
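Most Linux distributions already ship tmpfs mounts out of the box (/dev/shm and /run, for example); listing them shows the mechanism in action:

$ findmnt -t tmpfs

Creating a dedicated instance for caching is then a single mount command: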
$ mount -t tmpfs -o size=1024m tmpfs /mnt
- Pros
  - The size can be set to anything you like (it can even be resized on a live mount; see the remount example after this list)
  - Space is only consumed by what is actually stored, so usage grows and shrinks with the contents
  - If size is not specified, it defaults to half of physical memory
  - Reads and writes are extremely fast
- Cons
  - The contents are lost on power-off
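A useful consequence of the flexible sizing is that a mounted tmpfs can be resized in place; existing files are preserved as long as they still fit under the new limit:

$ mount -o remount,size=2048m /mnt
$ df -h /mnt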
echo "tmpfs /mnt tmpfs size=1024m 0 0\n" >> /etc/fstab
Now that the file system is up, let's measure its speed. The fio job file is shown below.
[global]
# libaio: Linux native asynchronous I/O engine
ioengine=libaio
# tmpfs does not support O_DIRECT, so buffered I/O is required
direct=0
thread=1
norandommap=1
randrepeat=0
runtime=60
ramp_time=6
# 1 GiB test file per job; 16 jobs => 16 GiB total
size=1g
# point this at the tmpfs mount created above (e.g. /mnt)
directory=/path
numjobs=16
iodepth=128
[read4k-rand]
# stonewall makes each section wait for the previous group, so the four
# tests run one at a time; group_reporting aggregates the 16 jobs
stonewall
group_reporting
bs=4k
rw=randread
[read64k-seq]
stonewall
group_reporting
bs=64k
rw=read
[write4k-rand]
stonewall
group_reporting
bs=4k
rw=randwrite
[write64k-seq]
stonewall
group_reporting
bs=64k
rw=write
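Save the job file (tmpfs-test.fio is my name for it, nothing fio mandates) and run it; the output looks like this:

$ fio tmpfs-test.fio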
Jobs: 16 (f=0): [_(48),/(16)][-.-%][r=0KiB/s,w=13.0GiB/s][r=0,w=3411k IOPS][eta 01m:05s]
read4k-rand: (groupid=0, jobs=16): err= 0: pid=13736: Mon Jun 8 15:25:25 2020
read: IOPS=3534k, BW=13.5GiB/s (14.5GB/s)(16.0GiB/1187msec)
clat percentiles (nsec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=14.95%, sys=84.75%, ctx=1711, majf=0, minf=2049
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=4194304,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
read64k-seq: (groupid=1, jobs=16): err= 0: pid=13752: Mon Jun 8 15:25:25 2020
read: IOPS=267k, BW=16.3GiB/s (17.5GB/s)(16.0GiB/981msec)
clat percentiles (nsec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=1.14%, sys=98.40%, ctx=1344, majf=0, minf=32784
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=262144,0,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
write4k-rand: (groupid=2, jobs=16): err= 0: pid=13768: Mon Jun 8 15:25:25 2020
write: IOPS=1572k, BW=6141MiB/s (6439MB/s)(16.0GiB/2668msec)
clat percentiles (nsec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=11.57%, sys=88.18%, ctx=3745, majf=0, minf=2424
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,4194304,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
write64k-seq: (groupid=3, jobs=16): err= 0: pid=13784: Mon Jun 8 15:25:25 2020
write: IOPS=254k, BW=15.5GiB/s (16.6GB/s)(16.0GiB/1033msec)
clat percentiles (nsec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=14.67%, sys=84.97%, ctx=1430, majf=0, minf=16
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=13.5GiB/s (14.5GB/s), 13.5GiB/s-13.5GiB/s (14.5GB/s-14.5GB/s), io=16.0GiB (17.2GB), run=1187-1187msec
Run status group 1 (all jobs):
READ: bw=16.3GiB/s (17.5GB/s), 16.3GiB/s-16.3GiB/s (17.5GB/s-17.5GB/s), io=16.0GiB (17.2GB), run=981-981msec
Run status group 2 (all jobs):
WRITE: bw=6141MiB/s (6439MB/s), 6141MiB/s-6141MiB/s (6439MB/s-6439MB/s), io=16.0GiB (17.2GB), run=2668-2668msec
Run status group 3 (all jobs):
WRITE: bw=15.5GiB/s (16.6GB/s), 15.5GiB/s-15.5GiB/s (16.6GB/s-16.6GB/s), io=16.0GiB (17.2GB), run=1033-1033msec
The I/O bandwidth is very high, and the IOPS are remarkable: 4k random reads hit 3534k, and even 4k random writes reached an astonishing 1572k.
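Since tmpfs pages are released when the file system goes away, the memory held by the test files can be reclaimed simply by unmounting:

$ umount /mnt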