Explain in a few sentences why the kernel panicked. You may find it useful to look up the stack trace (the sequence of %eip values printed by panic) in the kernel.asm listing.
I modified the iderw function in ide.c; after four or five tries, the kernel finally panicked.
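A sketch of the modification, assuming the exercise's standard change of enabling interrupts inside the idelock critical section (the body is roughly as in the x86 xv6 ide.c; only the sti()/cli() lines are added):

```c
void
iderw(struct buf *b)
{
  struct buf **pp;

  if(!(b->flags & B_BUSY))
    panic("iderw: buf not busy");
  if((b->flags & (B_VALID|B_DIRTY)) == B_VALID)
    panic("iderw: nothing to do");
  if(b->dev != 0 && !havedisk1)
    panic("iderw: ide disk 1 not present");

  acquire(&idelock);
  sti();              // added: enable interrupts inside the critical section

  // Append b to idequeue.
  b->qnext = 0;
  for(pp=&idequeue; *pp; pp=&(*pp)->qnext)
    ;
  *pp = b;

  // Start disk if necessary.
  if(idequeue == b)
    idestart(b);

  // Wait for request to finish.
  while((b->flags & (B_VALID|B_DIRTY)) != B_VALID)
    sleep(b, &idelock);

  cli();              // added: disable interrupts again
  release(&idelock);
}
```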
```
❯ make qemu
qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
xv6...
cpu1: starting 1
cpu0: starting 0
lapicid 1: panic: sched locks
 80103ca1 80103e12 80105a87 8010575c 801022b7 80100191 801014e5 8010155f 801037c4 8010575f
```
The execution order is as follows:

trapasm.S: trapret
-> proc.c: forkret
-> fs.c: iinit
-> fs.c: readsb
-> bio.c: bread
-> ide.c: iderw
-> trapasm.S: alltraps
-> trap.c: trap
-> proc.c: yield
-> proc.c: sched
So while the first user process was being started, a timer interrupt fired during the execution of iderw() (presumably after the sti() and before the cli()); the trap handler then rescheduled via yield(), and because ncli was not 1, sched() panicked.
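For reference, the check that fires is in sched() in proc.c (quoted roughly as in the x86 xv6 source): acquire(&idelock) has already done one pushcli(), and yield() acquiring ptable.lock does a second, so cpu->ncli is 2 when sched() runs.

```c
void
sched(void)
{
  int intena;

  if(!holding(&ptable.lock))
    panic("sched ptable.lock");
  if(cpu->ncli != 1)
    panic("sched locks");   // fires: idelock + ptable.lock => ncli == 2
  if(proc->state == RUNNING)
    panic("sched running");
  if(readeflags()&FL_IF)
    panic("sched interruptible");
  intena = cpu->intena;
  swtch(&proc->context, cpu->scheduler);
  cpu->intena = intena;
}
```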
Explain in a few sentences why the kernel didn't panic. Why do file_table_lock and ide_lock have different behavior in this respect?
The kernel probably doesn't panic because the time between acquire() and release() is so short that a timer interrupt never gets a chance to fire in between: unlike iderw(), whose critical section lasts as long as a disk request, the critical section guarded by file_table_lock is just a quick scan of the in-memory file table.
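The experiment for this question adds sti()/cli() inside filealloc() in file.c. A sketch, assuming the current xv6 naming ftable.lock for what the question calls file_table_lock:

```c
struct file*
filealloc(void)
{
  struct file *f;

  acquire(&ftable.lock);
  sti();                      // added: enable interrupts
  // This loop only scans a small in-memory table, so a timer
  // interrupt is very unlikely to land inside it.
  for(f = ftable.file; f < ftable.file + NFILE; f++){
    if(f->ref == 0){
      f->ref = 1;
      cli();                  // added
      release(&ftable.lock);
      return f;
    }
  }
  cli();                      // added
  release(&ftable.lock);
  return 0;
}
```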
Why does release() clear lk->pcs[0] and lk->cpu before clearing lk->locked? Why not wait until after?
If release() cleared lk->locked first, the following situation could arise: the moment the thread on cpu0 clears lk->locked, cpu1, which is spinning in acquire(), immediately grabs the lock and starts writing lk->cpu and lk->pcs[0], while cpu0 is also still writing lk->cpu and lk->pcs[0]. That is a data race.
```c
// Release the lock.
void
release(struct spinlock *lk)
{
  if(!holding(lk))
    panic("release");

  lk->pcs[0] = 0;
  lk->cpu = 0;

  // __sync_synchronize() keeps the memory operations above from
  // being reordered past the store below, ensuring that accesses
  // in the critical section do not happen after the lock is freed.
  __sync_synchronize();

  // lk->locked = 0 might not compile to an atomic store,
  // so use an assembly instruction that is.
  asm volatile("movl $0, %0" : "+m" (lk->locked) : );

  popcli();
}
```
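The other half of the race is visible in acquire() (again roughly as in the x86 xv6 source): the debugging fields are written right after the xchg succeeds, so if release() cleared lk->locked before lk->pcs[0] and lk->cpu, both CPUs could be writing those fields at the same time.

```c
// Acquire the lock. Loops (spins) until the lock is acquired.
void
acquire(struct spinlock *lk)
{
  pushcli(); // disable interrupts to avoid deadlock.
  if(holding(lk))
    panic("acquire");

  // The xchg is atomic: spin until we swap a 1 in and get 0 back.
  while(xchg(&lk->locked, 1) != 0)
    ;

  // Keep the critical section's loads and stores after this point.
  __sync_synchronize();

  // Record info about lock acquisition for debugging; these are the
  // writes that would race with release() clearing the same fields
  // if release() dropped lk->locked first.
  lk->cpu = cpu;
  getcallerpcs(&lk, lk->pcs);
}
```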