stupid disks needed …

… that’s the solution to many problems! only the OS (and maybe the user, who might want a say here) would know the following:

–> what files need to be accessed quickly (and within how many milliseconds) e.g. logfiles, config files, databases etc

if needed the OS could keep several copies spread all over the disk (coherency problems!) so the file is always reachable within a “short stroke” (make a note in RAM that, as soon as the disk is idle, all stale copies need updating)
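a minimal sketch of that RAM note, assuming the filesystem tracks every on-disk copy of a hot file (struct and function names are made up):

    /* per-file copy table the OS keeps in RAM */
    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_COPIES 8

    struct copy {
        unsigned long long lba;  /* where this copy starts on disk */
        bool stale;              /* master changed, copy not rewritten yet */
    };

    struct hot_file {
        struct copy copies[MAX_COPIES];
        size_t n_copies;
    };

    /* on every write to the master copy: just set flags, no extra seeks */
    void mark_copies_stale(struct hot_file *f, size_t master)
    {
        for (size_t i = 0; i < f->n_copies; i++)
            if (i != master)
                f->copies[i].stale = true;
    }

    /* whenever the disk goes idle: bring the stale copies up to date */
    void flush_stale_copies(struct hot_file *f)
    {
        for (size_t i = 0; i < f->n_copies; i++)
            if (f->copies[i].stale) {
                /* rewrite copy i from the master here (actual I/O elided) */
                f->copies[i].stale = false;
            }
    }

reads can then be served from whichever non-stale copy happens to be closest to the heads.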

–> what files need fast transfers (in bytes per second) e.g. the hibernation file, video editing (HD demands!)

look for some nice real estate around the outer cylinders (highest linear speed, so highest sequential transfer rate), and do NOT allow sector remapping to ruin the data flow
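on practically every modern drive the low LBAs map to the outer cylinders, so an allocator for this could be as simple as the following (the bump allocator and the no_remap flag are invented for illustration):

    #include <stdbool.h>

    #define STREAM_ZONE_END 0x1000000ULL  /* assumed size of the fast zone, in sectors */

    struct extent {
        unsigned long long start_lba;
        unsigned long long sectors;
        bool no_remap;  /* tell the controller: report errors, do not silently remap */
    };

    static unsigned long long next_free = 0;  /* trivial bump allocator */

    /* returns false if the fast zone is full; caller falls back to normal space */
    bool alloc_streaming_extent(struct extent *e, unsigned long long sectors)
    {
        if (next_free + sectors > STREAM_ZONE_END)
            return false;
        e->start_lba = next_free;
        e->sectors   = sectors;
        e->no_remap  = true;  /* a bad sector should relocate the whole extent,
                                 not punch a seek into the middle of the stream */
        next_free += sectors;
        return true;
    }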

–> what files are important e.g. email, OS files etc so that the OS can make sure those land in safe areas (NO read or ECC errors in the past, check them frequently, make auto-backup copies someplace else)

a normal disk controller knows nothing about the nature of the data that gets sent to it. several categories need to be established, see above, e.g. transfer speed / reliability / access time, each with its own weight, so an intelligent choice can still be made even when the disk is getting totally filled up
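here’s a sketch of that weighted choice: score every free region against a per-class weight vector and take the best one left (all structs, fields and weights are invented for illustration):

    #include <stdio.h>

    struct region {
        double mb_per_sec;     /* measured sequential transfer speed */
        double avg_seek_ms;    /* access time from a typical head position */
        double error_history;  /* 0.0 = spotless, 1.0 = lots of past ECC trouble */
        int    free;           /* 1 = still unallocated */
    };

    struct file_class {        /* per-category weights, set by the OS or the user */
        double w_speed, w_access, w_safety;
    };

    static double score(const struct region *r, const struct file_class *c)
    {
        return c->w_speed  * r->mb_per_sec
             - c->w_access * r->avg_seek_ms
             - c->w_safety * r->error_history * 100.0;
    }

    /* always returns SOME free region, so placement still works on a nearly
       full disk; the weights just decide which compromise wins */
    static int pick_region(const struct region *rs, int n, const struct file_class *c)
    {
        int best = -1;
        double best_s = 0.0;
        for (int i = 0; i < n; i++) {
            if (!rs[i].free)
                continue;
            double s = score(&rs[i], c);
            if (best < 0 || s > best_s) {
                best = i;
                best_s = s;
            }
        }
        return best;  /* -1 only when the disk is truly full */
    }

    int main(void)
    {
        struct region disk[] = {
            { 250.0, 12.0, 0.0, 1 },  /* outer cylinders: fastest transfers */
            { 200.0,  8.0, 0.0, 1 },  /* middle cylinders: shortest seeks */
            { 150.0, 15.0, 0.3, 1 },  /* remapped area: slow, bad history */
        };
        struct file_class database = { 0.1, 5.0, 2.0 };  /* access time matters */
        struct file_class video    = { 1.0, 0.1, 0.5 };  /* throughput matters */

        printf("database -> region %d\n", pick_region(disk, 3, &database));
        printf("video    -> region %d\n", pick_region(disk, 3, &video));
        return 0;
    }

same free space, different winner: the database class lands on the short-seek middle region, the video class on the fast outer region.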

the OS needs to make the decision to remap or not, and how to use the remapped areas, e.g. for unimportant files that only get accessed once in a while
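the policy itself can be one line, something like this (the file_meta fields are hypothetical, stats a filesystem could easily keep):

    #include <stdbool.h>

    struct file_meta {
        bool important;            /* email, OS files, ... */
        unsigned reads_per_month;  /* tracked by the filesystem */
    };

    /* remapped sectors cost an extra seek and have a bad track record,
       so reserve them for cold, expendable data */
    bool may_use_remapped_area(const struct file_meta *f)
    {
        return !f->important && f->reads_per_month <= 1;
    }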

disk controllers could be smaller and cheaper, and overall you’d get more flexibility and speed!

even SSDs would benefit: low-speed, high-capacity cells are cheaper (smaller) to make than high-speed cells, so even within the same die you could separate the two and let the OS choose
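a minimal sketch of that choice, assuming a drive that actually exposed two such areas to the host (today’s drives do something similar internally with SLC caching, but there the controller decides, not the OS):

    enum ssd_tier { TIER_FAST_SMALL, TIER_DENSE_CHEAP };

    enum ssd_tier choose_tier(unsigned writes_per_day, unsigned long long bytes)
    {
        /* hot and small: worth the expensive fast cells;
           big or cold: the cheap high-capacity cells are plenty */
        if (writes_per_day > 10 && bytes < 64ULL << 20)  /* under 64 MiB */
            return TIER_FAST_SMALL;
        return TIER_DENSE_CHEAP;
    }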