Yay, another week and another 3 assignments; here's my latest:
Since the invention of digital computing machines such as the Manchester Small-Scale Experimental Machine (University of Manchester, 2005)[i], non-volatile storage has been a requirement as a result of the stored-program concept (Brookshear et al, 2009, p.102)[ii]. As outlined in sections 2.1 to 2.4 of that text, machine language in conjunction with computer systems architecture is what accomplishes work within any digital processing system. Although the machine language, system interface buses, volatile memory (i.e. Random Access Memory, or RAM) and non-volatile storage mechanisms may differ from system to system, the one requirement all computing machines share is non-volatile storage accessed via a controller. (Brookshear et al, 2009, p.123 – p.125)[iii]
The first non-volatile storage medium used was paper tape, followed by punch cards, then cathode ray tubes, and eventually magnetic tape, magnetic drums and our current paradigm of magnetic platters. (Computer History Museum, 2006)[iv]
Each shift within the non-volatile storage paradigm increased the amount of storage available and thus the overall functionality of computing systems, since they could operate with faster input and output and work with ever larger data sets. The increase in size and flexibility was not without sacrifice: the impermanence of current magnetic systems, along with their reduced physical size, has led to many ethical concerns regarding data disposal and management, as news articles commonly report the loss and potential theft of information on modern hard drives. (BBC News, 2007)[v]
Generally, the increase in available storage space has led to larger operating systems, more abstracted programming languages which require less programming time and, oddly enough, faster overall system operation.
The current shift in paradigm to NAND-based flash memory is one of cost, speed, power consumption and size. Current hard disk systems use technologies such as Perpendicular Magnetic Recording (PMR) with Giant Magnetoresistive (GMR) heads to achieve areal densities of up to tens of gigabytes per square centimeter.
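To give a feel for what an areal density in that range means, here is a rough back-of-the-envelope sketch. The platter dimensions and the 10 GB/cm² figure are assumptions chosen only for illustration, not figures from the cited sources:

```python
import math

def platter_capacity_gb(density_gb_per_cm2, outer_radius_cm, inner_radius_cm, sides=2):
    """Approximate one platter's capacity: areal density times the usable
    annular recording area, counted on both surfaces."""
    usable_area_cm2 = math.pi * (outer_radius_cm**2 - inner_radius_cm**2)
    return density_gb_per_cm2 * usable_area_cm2 * sides

# Assumed 3.5" platter geometry: ~4.6 cm outer and ~1.5 cm inner
# recording radius, at an assumed density of 10 GB per square cm.
capacity = platter_capacity_gb(10, 4.6, 1.5)
print(f"~{capacity:.0f} GB per platter")
```

With those assumed numbers a single platter holds on the order of a terabyte, which is why platter count is the remaining lever once density stalls.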
IBM has stated that as GMR head size is reduced to its physical limit, the only available method to increase storage in a mechanical drive is to increase the platter count; this is one reason they sold their storage manufacturing arm to Hitachi. (EETimes India, 2008)[viii] The limitation of any physical system containing moving parts is latency, and even though current drives may spin anywhere between 7,200 RPM and 15,000 RPM, the motor spinning the platters consumes a large amount of power compared with the requirements of non-volatile "flash" memory. The main trade-off with the current trend towards non-volatile NAND flash memory is that its areal density is lower, its reliability is the same, and its cost per gigabyte is more than double that of standard hard disk drives. (ACSL, 2008)[ix]
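The cost-per-gigabyte comparison above is simple division; the sketch below makes it concrete. The prices and capacities are illustrative 2008-era assumptions of my own, not figures quoted from the cited articles:

```python
def cost_per_gb(price_usd, capacity_gb):
    """Unit cost of storage in US dollars per gigabyte."""
    return price_usd / capacity_gb

# Assumed example prices: a $100 500 GB hard disk vs. a $400 64 GB flash SSD.
hdd = cost_per_gb(100, 500)
ssd = cost_per_gb(400, 64)

print(f"HDD: ${hdd:.2f}/GB, SSD: ${ssd:.2f}/GB, ratio: {ssd / hdd:.0f}x")
```

Even with generous assumptions the flash drive lands well past "more than double" per gigabyte, which is why it only wins where power, ruggedness or latency dominate the decision.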
Since storage has become a commoditized market, the motivating factor behind the adoption of any new or existing technology is cost. The primary limitation of a computer's usefulness is a function of available computing power in conjunction with its available data set; increase the data set and the breadth of function increases. This in turn translates into larger, more functional applications and operating systems. Although the cost of non-volatile flash memory is an order of magnitude higher for solid-state systems, which offer only a fraction of the storage of a comparably priced hard disk drive, flash memory's benefits are lower power consumption, ruggedized operation, and lower interface latency with higher throughput. The current reduction in the price of non-volatile flash memory therefore allows longer operational times for mobile devices ranging from multimedia players such as the iPod to ultra-thin-and-light portable computers such as the MacBook Air. Ultimately this allows a systems designer to specify the storage type by operational environment and application cost sensitivity: instead of fitting the application to the computer, we now design the computer to meet the application.
The changes in data processing and storage interfaces require new, faster bus technologies such as Serial ATA and Serial Attached SCSI. As opposed to the previous parallel buses, these provide higher throughput and more sophisticated controllers with integrated optimizations such as Native Command Queuing.
These architectural changes will allow future operating systems to be ten or more times larger than previous ones, owing to the increased availability and decreased cost of storage space; this in turn results in more options and applications for the end user. Data-intensive tasks such as computer-aided design, video editing, multimedia production and high-definition media playback, once confined to specialized systems, are now economically viable on inexpensive desktop systems, laptop computers, gaming platforms or even handheld devices.
The reduction in the cost of storage has changed the way people watch and create movies and purchase, enjoy or produce music, and has driven the increased complexity and cost of video games. People now use their personal computers to store vast amounts of information: systems that were once limited to document production, web browsing and the occasional video game now hold entire libraries of music and movies. The increased reliance on this storage has brought about issues with personal privacy and open international piracy of copyrighted works.
As the cost of storage continues to decrease, it will modify the way we entertain ourselves and what kind of information we store, changing our shopping habits and enabling distributed computing environments like Folding@home, which is theoretically the world's most powerful supercomputer (Vijay Pande, 2008)[x]. This paradigm shift has also produced self-published content, often referred to as Web 2.0 (O'Reilly, 2005)[xi], where the data and information we have drives what and how we choose to consume. The increased risk of hardware failure has a larger impact now than it did twenty years ago: where it once took a fire to destroy your music collection, home videos, books and articles of value (including ancient software encoded on punch cards), the same destruction may now be wrought by a single computer virus and an unwary end user.
[i] University of Manchester (2005).
[ii] Glenn Brookshear (2009), Computer Science: An Overview, 10th international ed., Pearson Addison Wesley.
[iii] Glenn Brookshear (2009), Computer Science: An Overview, 10th international ed., Pearson Addison Wesley.
[iv] Computer History Museum (2006). (Accessed September 30th 2008)
[v] BBC News (2007), Millions of L-Driver Details Lost [Online] World Wide Web, Available from: http://news.bbc.co.uk/2/hi/uk_news/politics/7147715.stm (Accessed September 30th 2008)
[vi]
[vii] Pricewatch (2008), 3.5" SATA 1 TB listings [Online] World Wide Web, Available from: http://www.pricewatch.com/hard_removable_drives/sata_1tb.htm (Accessed September 30th 2008)
[viii] EETimes India (2008), IBM, Qimonda, Macronix Plot Storage Tech Roadmap [Online] World Wide Web, Available from: http://www.eetindia.co.in/ART_8800440589_1800009_NT_bd0e8ab4.HTM (Accessed September 30th 2008)
[ix] ACSL (2008), Flash Memory vs. HDD: Who Will Win? [Online] World Wide Web, Available from: http://www.storagesearch.com/semico-art1.html (Accessed September 30th 2008)
[x] Vijay Pande (2008).
[xi] Tim O’Reilly (2005), Design Patterns and Business Models for the Next Generation of Software [Online] World Wide Web, Available from: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html (Accessed September 30th 2008)