Main Technical Specifications and Parameters of a Hard Disk
Capacity
We can look at capacity in two ways: the total capacity of the drive and the capacity of a single platter. The total capacity is the sum of the individual platter capacities. Increasing the per-platter capacity not only raises the total capacity, it also improves the transfer speed and cuts the cost per unit of storage.
Rotational Speed
Rotational speed is the speed at which the platters spin, measured in RPM (Revolutions Per Minute). Typical rotational speeds for IDE hard disks are 5400 RPM and 7200 RPM.
Average Seek Time
The average seek time gives a good measure of the speed of the drive in a multi-user environment, where successive read/write requests are largely uncorrelated. Around 10 ms is common for a hard disk, and around 200 ms for an eight-speed CD-ROM.
Latency
The hard disk platters spin at high speed, and the spin is not synchronized to the process that moves the read/write heads to the correct cylinder on a random access. Therefore, by the time the heads arrive at the correct cylinder, the sector that is needed may be anywhere on the track. After the actuator assembly has completed its seek to the correct track, the drive must wait for the correct sector to come around to where the read/write heads are located. This time is called latency. Latency is directly related to the spindle speed of the drive, and as such is influenced solely by the drive's spindle characteristics.
Conceptually, latency is rather simple to understand, and it is also easy to calculate. The faster the disk is spinning, the sooner the correct sector will rotate under the heads, and the lower the latency will be. Sometimes the sector will be at just the right spot when the seek is completed, and the latency for that access will be close to zero. Sometimes the needed sector will have just passed the head, and in this “worst case” a full rotation will be needed before the sector can be read. On average, latency will be half the time it takes for a full rotation of the disk.
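The half-rotation rule above is easy to put into numbers. A short sketch (the RPM values are just common examples, not tied to any particular drive):

```python
def average_latency_ms(rpm):
    """Average rotational latency: half of one full rotation.

    One rotation takes 60000 / rpm milliseconds, so on average the
    drive waits half of that for the target sector to come around.
    """
    full_rotation_ms = 60_000 / rpm
    return full_rotation_ms / 2

# Typical spindle speeds and their average latencies:
for rpm in (5400, 7200, 10000):
    print(f"{rpm} RPM -> {average_latency_ms(rpm):.2f} ms")
```

This is why a 7200 RPM drive (about 4.17 ms average latency) positions noticeably faster than a 5400 RPM drive (about 5.56 ms), independent of seek time.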
Average Access Time
Access time is the metric that represents the composite of all the other specifications reflecting random positioning performance in the hard disk. As such, it is the best figure for assessing overall positioning performance, and you’d expect it to be the specification most used by hard disk manufacturers and enthusiasts alike. Depending on your level of cynicism, then, you will either be very surprised, or not surprised much at all, to learn that it is rarely even discussed. Ironically, in the world of CD-ROMs and other optical storage it is the figure that is universally used for comparing positioning speed. I am really not sure why this discrepancy exists.
Perhaps the problem is that access time is really a derived figure, composed of the other positioning performance specifications. The most common definition is:
Access Time = Command Overhead Time + Seek Time + Settle Time + Latency
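The formula is a straight sum, as a quick sketch shows. The figures below are purely illustrative, not taken from any drive’s datasheet:

```python
def access_time_ms(command_overhead, seek, settle, latency):
    """Access time per the formula above (all values in milliseconds)."""
    return command_overhead + seek + settle + latency

# Hypothetical figures for a 7200 RPM-class drive (illustrative only):
t = access_time_ms(command_overhead=0.5, seek=9.0, settle=0.1, latency=4.17)
print(f"{t:.2f} ms")
```

Note that seek time and latency dominate the total, which is why those two are the specifications manufacturers usually quote.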
Data Transfer Rate
The data transfer rate is the speed with which data can be transmitted from one device to another. Data rates are often measured in megabits (millions of bits) or megabytes (millions of bytes) per second. These are usually abbreviated as Mbps and MBps, respectively.
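The Mbps/MBps distinction trips people up constantly; since a byte is 8 bits, the conversion is a single division. A minimal sketch (the 480 Mbps figure is just a familiar example, USB 2.0’s nominal rate):

```python
def mbps_to_MBps(megabits_per_second):
    """Convert megabits per second (Mbps) to megabytes per second (MBps).

    One byte is 8 bits, so divide by 8.
    """
    return megabits_per_second / 8

print(mbps_to_MBps(480))  # USB 2.0's nominal 480 Mbps is 60 MBps
```

So a link advertised at “480 Mbps” can move at most 60 megabytes of data per second, before protocol overhead.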
Cache
A cache is a small, fast memory holding recently accessed data, designed to speed up subsequent access to the same data. The term is most often applied to processor-memory access, but it is also used for a local copy of data accessible over a network, etc.
When data is read from, or written to, main memory a copy is also saved in the cache, along with the associated main memory address. The cache monitors addresses of subsequent reads to see if the required data is already in the cache. If it is (a cache hit) then it is returned immediately and the main memory read is aborted (or not started). If the data is not cached (a cache miss) then it is fetched from main memory and also saved in the cache.
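The hit/miss logic above can be sketched in a few lines. This toy model ignores capacity limits and cache lines (the class and names are invented for illustration); it only shows the lookup path:

```python
class SimpleCache:
    """Minimal read-cache sketch: addresses map to cached data."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # dict: address -> data (stands in for RAM)
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: main memory is never touched
            self.hits += 1
            return self.cache[address]
        self.misses += 1                 # cache miss: fetch from main memory
        data = self.main_memory[address]
        self.cache[address] = data       # save a copy for next time
        return data

mem = {0x10: "a", 0x20: "b"}
c = SimpleCache(mem)
c.read(0x10)   # miss: fetched from main memory and cached
c.read(0x10)   # hit: satisfied from the cache
print(c.hits, c.misses)
```

The `hits` and `misses` counters also make the hit rate discussed below directly measurable: hit rate = hits / (hits + misses).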
The cache is built from faster memory chips than main memory so a cache hit takes much less time to complete than a normal memory access. The cache may be located on the same integrated circuit as the CPU, in order to further reduce the access time. In this case it is often known as primary cache since there may be a larger, slower secondary cache outside the CPU chip.
The most important characteristic of a cache is its hit rate – the fraction of all memory accesses which are satisfied from the cache. This in turn depends on the cache design but mostly on its size relative to the main memory. The size is limited by the cost of fast memory chips.
The hit rate also depends on the access pattern of the particular program being run (the sequence of addresses being read and written). Caches rely on two properties of the access patterns of most programs: temporal locality – if something is accessed once, it is likely to be accessed again soon, and spatial locality – if one memory location is accessed then nearby memory locations are also likely to be accessed. In order to exploit spatial locality, caches often operate on several words at a time, a “cache line” or “cache block”. Main memory reads and writes are whole cache lines.
When the processor wants to write to main memory, the data is first written to the cache on the assumption that the processor will probably read it again soon. Various different policies are used. In a write-through cache, data is written to main memory at the same time as it is cached. In a write-back cache it is only written to main memory when it is forced out of the cache.
If all accesses were writes then, with a write-through policy, every write to the cache would necessitate a main memory write, thus slowing the system down to main memory speed. However, statistically, most accesses are reads and most of these will be satisfied from the cache. Write-through is simpler than write-back because an entry that is to be replaced can just be overwritten in the cache, as it will already have been copied to main memory, whereas write-back requires the cache to initiate a main memory write of the flushed entry, followed (for a processor read) by a main memory read. However, write-back is more efficient because an entry may be written many times in the cache without a main memory access.
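The two write policies can be contrasted in a small sketch (class names and the `dirty` set are illustrative, not a real hardware interface):

```python
class WriteThroughCache:
    """Write-through: every write updates both the cache and main memory."""

    def __init__(self, main_memory):
        self.main_memory = main_memory   # dict standing in for RAM
        self.cache = {}

    def write(self, address, data):
        self.cache[address] = data
        self.main_memory[address] = data  # main memory is always up to date

class WriteBackCache:
    """Write-back: writes stay in the cache until the entry is flushed."""

    def __init__(self, main_memory):
        self.main_memory = main_memory
        self.cache = {}
        self.dirty = set()                # addresses not yet written to main memory

    def write(self, address, data):
        self.cache[address] = data        # no main-memory traffic here
        self.dirty.add(address)           # main memory is now stale

    def flush(self, address):
        """Write a dirty entry back to main memory, e.g. on replacement."""
        if address in self.dirty:
            self.main_memory[address] = self.cache[address]
            self.dirty.discard(address)
```

With write-back, a hundred writes to the same address cost one main-memory write (at flush time); with write-through they cost a hundred.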
When the cache is full and it is desired to cache another line of data then a cache entry is selected to be written back to main memory or “flushed”. The new line is then put in its place. Which entry is chosen to be flushed is determined by a “replacement algorithm”.
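One common replacement algorithm is least-recently-used (LRU): flush the entry that has gone longest without being accessed. A minimal sketch (the class is illustrative; real caches implement this in hardware with approximations):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement: when the cache is full, the entry
    that has gone longest without access is flushed to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()      # oldest entry first

    def access(self, address, data):
        if address in self.entries:
            self.entries.move_to_end(address)   # mark as most recently used
            return self.entries[address]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)    # flush the least recent entry
        self.entries[address] = data
        return data

c = LRUCache(capacity=2)
c.access("A", 1)
c.access("B", 2)
c.access("A", 1)          # "A" is now the most recently used entry
c.access("C", 3)          # cache full: "B" (least recent) is flushed
print(list(c.entries))
```

Other replacement algorithms exist (random, FIFO, pseudo-LRU); LRU is simply the one that most directly exploits temporal locality.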
Some processors have separate instruction and data caches. Both can be active at the same time, allowing an instruction fetch to overlap with a data read or write. This separation also avoids the possibility of bad cache conflict between say the instructions in a loop and some data in an array which is accessed by that loop.
Noise & Temperature
Both noise and heat come mainly from the spindle motor, so the motor is the key to reducing them. Keeping the temperature of the hard disk down helps keep the drive working reliably.