Thursday, October 23, 2008

Programming Language Generations

The well-documented history of programming languages includes at least one language named long after its author's death. Ada is one of those: although Ada Lovelace never saw the machine Charles Babbage designed but never built, she did write a program for it that would, in theory, calculate Bernoulli numbers, and it has since been shown to work. (Baum)i Although the name Ada was applied retroactively in her honour, many languages were available and in use before Ada appeared, including ALGOL, APT, FORTRAN and many others. (Pigot)ii

The broad history of software is available from the HOPL, a site devoted to the archival of all programming languages (Pigot)iii. We could argue that the Jacquard loom of the early nineteenth century (1801) was the first use of punched cards, its card chains controlling the loom to weave fine fabrics; but when it comes to computers and programming, the cards were actually pre-dated by switches. (Penn)iv

The reasons for abstracting programming languages are simple: machine language is nearly impossible to think in. Humans usually think in the terms of the language they have been taught (New Scientist, 2004)v, not in 1s and 0s. The number of errors experienced by early computers during their first runs was remarkable (CNET, 2006)vi; a good portion of these errors were the result of humans who did not understand how to program the machine's memory, or of mechanical faults. This situation has not changed much since the first days of computing; we humans have not evolved much in the last 50 years, but the languages used to program computer hardware certainly have.

Mechanical gears and vacuum tubes have given way to semiconductors: transistors, capacitors and resistors that are a tiny fraction of the width of a human hair and consume very little electricity while functioning. This enormous increase in computing power has brought its own issues; since there are few moving parts in a modern computer, the potential for mechanical failure has been replaced by the potential for failure in software, where the complexity once embodied in moving parts may be equated to lines of code. (Lew et al.)vii

The term debugging is often traced to the U.S. Navy's computation laboratory at Harvard University, where Rear Admiral Grace Murray Hopper noted issues with a program running on the Mark II computer; her team removed a moth from one of its relays, popularizing the term debugging in the process. (Time, 1984)viii

The modern computer has a gargantuan amount of storage, both core memory and non-volatile storage, compared with the computers Grace Hopper used. To imagine how to program in machine language we first have to consider the underlying system architecture; this is defined by the vendor of the system in question and is referred to as the machine language. (IBM, 1998)ix The most commonly sold personal computer today uses a machine language, or instruction set architecture, referred to as x86 and x86-64. x86 has been around a long time; the 32-bit version alone has over 300 separate instructions for integer, floating-point and other operations. Combined with 32-bit registers and the four gigabytes of memory that 32 bits can address, the task of programming the system manually quickly becomes impractical (Smith, 2005)x, although some circles still do it (Wikipedia)xi. Programming languages arose as a response to machine-language complexity.

Enter the "compiler": a software program that takes an abstracted, human-readable programming language and turns it into machine-usable code. One of the earliest compilers was NELIAC (HOPL)xii, which compiled a dialect of ALGOL. The early compilers for FORTRAN, Pascal and other languages were hand made, that is, built to transform instructions by the hands of some very talented programmers; modern compilers are compiled by other compilers. A newer version of a compiler uses an older version of the same compiler to compile itself, and once compiled it may compile other software as needed; examples include GNU's GCC (GNU)xiii, or any commercially available compiler.
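To make the idea concrete, here is a minimal sketch in Python (not any historical compiler; the instruction names are invented for illustration) of what every compiler does: translate a human-readable expression into instructions for a simple machine, in this case an imaginary stack machine, and then execute them.

import ast

def compile_expr(source):
    """Translate an arithmetic expression into stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

    def emit(node):
        if isinstance(node, ast.BinOp):
            return emit(node.left) + emit(node.right) + [(ops[type(node.op)],)]
        if isinstance(node, ast.Constant):
            return [("PUSH", node.value)]
        raise ValueError("unsupported syntax")

    return emit(ast.parse(source, mode="eval").body)

def run(program):
    """Execute the compiled instructions on a tiny stack machine."""
    stack = []
    for instr in program:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[instr[0]])
    return stack.pop()

code = compile_expr("(2 + 3) * 4")
print(code)       # [('PUSH', 2), ('PUSH', 3), ('ADD',), ('PUSH', 4), ('MUL',)]
print(run(code))  # 20

A bootstrapped compiler such as GCC applies the same principle to itself: an older build translates the source of the newer one.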

Programming paradigms have evolved along with the underlying technology. The current object-oriented paradigm emerged because we had developed computers with enough resources to devote to a compiler, reducing the time required to create software at the cost of some program efficiency. The previous common paradigm for high-level languages was procedural, exemplified by ALGOL, FORTRAN and PL/1 and built around routines and mathematical expressions; the move toward more conversational imperative languages such as BASIC was a logical transition, since thinking in mathematical abstraction is harder than thinking in something closer to natural language, and object-based languages such as Smalltalk followed. The object-oriented paradigm itself grew out of using imperative, procedural languages such as C, discovering their limitations in large programming environments, and then developing C++ and the object-oriented approach to reduce programming time.

The measure of a computer's overall efficiency is the time it takes to complete a given task; this is at the heart of benchmarking computer systems (University of Tennessee)xiv. The time required to produce code, however, is measured in man-hours (Anselmo et al.)xv. Since the primary cost of developing software is the human developing it, any reduction in the time a programmer spends creating a program is money saved by the company making that software; this forms the basis of the programming-language market, which is a large business.
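As a small illustration of "time to complete a given task" as the benchmark metric, the sketch below (in Python, with an invented workload; the absolute numbers will vary by machine) times two implementations of the same job and compares them.

import timeit

setup = "data = list(range(10000))"

# Two implementations of the same task: summing a list of numbers.
loop_version = """
total = 0
for x in data:
    total += x
"""
builtin_version = "total = sum(data)"

# timeit reports the wall-clock time for `number` repetitions of each snippet.
t_loop = timeit.timeit(loop_version, setup=setup, number=1000)
t_builtin = timeit.timeit(builtin_version, setup=setup, number=1000)

print("explicit loop : %.3f s" % t_loop)
print("built-in sum(): %.3f s" % t_builtin)
print("speed-up      : %.1fx" % (t_loop / t_builtin))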

Rapid Application Development and Extreme Programming are methodologies that make the development of application features and the reliability of the application paramount in both development and support. The current paradigm is shifting in favour of methodology-driven environments, where project-management methods determine the controls for programming projects and the underlying languages become less important. Current trends include the move to interpreted, object-oriented, interactive and distributed languages and platforms such as Python, PHP, C# and .NET, although C/C++ and Java will be around for decades to come, more as a result of software-market pressure than of evolution, wherever existing requirements stipulate that development time be minimal and the programs created serve a modular function, such as content management systems with native database functionality. We are also seeing trends toward object-oriented database access in addition to traditional variable-storage methods, where applications are created with a database attached to them to improve performance and functionality. There is also the move online to standards such as AJAX (Open Directory)xvi and SOAP (W3C)xvii, where formal data interchange is supported by ubiquitous standards between unrelated third-party organizations; a prominent example is Facebook, which is written in PHP and uses AJAX and SOAP-style interfaces to let third-party companies access people's personal data. The social ramifications of this endeavour are still being worked out.

We have yet to produce a computer that can write its own programs. Although LISP has many applications that can rewrite their own functions and variables according to goals and neural models (Luger)xviii, we have yet to see an application that can produce programs by objective alone. Many experiments have been conducted, and automating programming remains one of the great challenges of artificial intelligence; we can even purchase software that aids in modelling, selection and classification of unknowns, but we cannot yet buy applications that code themselves. (Ward Systems)xix

---------------------------------------------------------------------------------------------------------------
i Baum, Joan (Archon Books, December 1986). The Calculating Passion of Ada Byron.
ii Pigot, Diramud (Murdoch University, 2006) HOPL: an interactive Roster of Programming Languages, [Online] World Wide Web, Available From:
http://hopl.murdoch.edu.au/
(Accessed on October 20th 2008)
iii Pigot, Diramud (Murdoch University, 2006) HOPL: an interactive Roster of Programming Languages, [Online] World Wide Web, Available From:
http://hopl.murdoch.edu.au/
(Accessed on October 20th 2008)
iv Penn University (n.d.) Online ENIAC Museum [Online] World Wide Web, Available From:
http://www.seas.upenn.edu/~museum/3tech.html
(Accessed on October 20th 2008)
v Biever, Celeste (August 19th 2004) Language May Shape Human Thought [Online] World Wide Web, Available From:
http://www.newscientist.com/article.ns?id=dn6303
(Accessed on October 20th 2008)
vi Kanellos, Michael (CNET, February 13th 2006) ENIAC: A Computer is Born [Online] World Wide Web,
Available From:
http://news.cnet.com/ENIAC-A-computer-is-born/2009-1006_3-6037980.html
(Accessed on October 20th 2008)
vii Lew, K.S.; Dillon, T.S.; Forward, K.E. (IEEE Transactions on Software Engineering, November 1988, Vol. 14, No. 11, pp. 1645–1655) Software Complexity and its Impact on Software Reliability, [Online] World Wide Web, Available From:
http://www2.computer.org/portal/web/csdl/doi/10.1109/32.9052
(Accessed on October 20th 2008)
viii Taylor, Alexander (Time April 16th 1984) The wizard inside the machine [Online] World Wide Web,
Available From:
http://www.time.com/time/printout/0,8816,954266,00.html
(Accessed on October 20th 2008)
ix Super Computer Education & Research Centre, Indian Institute of Science (IBM, n.d.) Online Glossary of Terms [Online] World Wide Web,
Available From:
http://www.serc.iisc.ernet.in/ComputingFacilities/systems/cluster/vac-7.0/html/glossary/czgm.htm
(Accessed on October 20th 2008)
x Smith, Zack (Smith, 2005) The Intel 8086/8088/80186/80286/80386/80486 Instruction Set. [Online] World Wide Web, Available from:
http://home.comcast.net/~fbui/intel.html
(Accessed on October 20th 2008)
xi Wikipedia (n.d) Demo (Computer Programming), [Online] World Wide Web, Available From:
http://en.wikipedia.org/wiki/Intros#Intros
(Accessed on October 20th 2008)
xii Pigot, Diramud (Murdoch University, 2006) Navy Electronics Laboratory International ALGOL Compiler [Online] World Wide Web, Available From:
http://hopl.murdoch.edu.au/showlanguage2.prx?exp=32
(Accessed on October 20th 2008)
xiii Free Software Foundation (July 31st 2008) The GNU Compiler Collection (GCC) [Online] World Wide Web, Available From:
http://gcc.gnu.org/
(Accessed On October 20th 2008)
xiv Petitet, A.; Whaley, R. C.; Dongarra, J.; Cleary, A. (University of Tennessee, September 10 2008) HPL - A Portable Implementation of the High-Performance Linpack Benchmark for Distributed-Memory Computers, [Online] World Wide Web, Available From:
http://www.netlib.org/benchmark/hpl/
(Accessed On October 20th 2008)
xv Anselmo, Donald; Ledgard, Henry (ACM Proceedings, November 2003) Measuring Productivity in the Software Industry, [Online] PDF Document, Available From:
http://portal.acm.org/citation.cfm?doid=948383.948391
(Accessed on October 20th 2008)
xvi OpenDirectory (Netscape, 2007) Programming Languages – AJAX, [Online] World Wide Web
Available From:
http://www.dmoz.org/Computers/Programming/Languages/JavaScript/AJAX/
(Accessed on October 20th 2008)
xvii Box, Don; Ehnebuske, David; Kakivaya, Gopal; Layman, Andrew; Mendelsohn, Noah; Nielsen, Henrik Frystyk; Thatte, Satish; Winer, Dave (W3C Note, May 2000) Simple Object Access Protocol (SOAP) 1.1, [Online] World Wide Web, Available From:
http://www.w3.org/TR/2000/NOTE-SOAP-20000508/
(Accessed on October 20th 2008)
xviii Phd. Luger, George (Pearson, 2009) Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th Edition, [Online] PDF Document, Available From:
http://www.cs.unm.edu/~luger/ai-final/supplement-book-preface.pdf
(Accessed On October 20th 2008)
xix Ward Systems (Wardsystemsgroup, 2007) Advanced Neural Network and Genetic Algorithm Software Information Page [Online] World Wide Web Available From:
http://www.wardsystems.com/
(Accessed on October 20th 2008)

Thursday, October 2, 2008

How to increase storage subsystem speed (without increasing disk rotation)

The primary reason that physical disks are orders of magnitude slower than the registers and cache of a central processing unit is simple: one relies on physical motion, while the other uses electrical signals travelling at a substantial fraction of the speed of light. The limiting factors for a signal travelling through a semiconductor are the material and its fabrication size, often referred to as the fabrication "process"; in turn, the process sets the physical limitations of the solid-state circuitry.

The theoretical limit is described by Taylor and Wheeler (1992)i as 2L/c, where L is the average distance to the memory and c is the speed of light, the maximum signal velocity.
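As a rough worked example, and assuming a 5 cm signal path (a made-up figure on the order of a motherboard trace), the 2L/c round trip works out to a fraction of a nanosecond; real signals in copper or silicon travel slower than c, so this is only the theoretical floor.

# Round-trip signal-time bound 2L/c for an assumed distance of 5 cm.
c = 299792458.0   # speed of light in a vacuum, in metres per second
L = 0.05          # assumed average distance to memory, in metres

t_round_trip = 2 * L / c                         # seconds
print("2L/c = %.3f ns" % (t_round_trip * 1e9))   # about 0.334 ns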


The primary limiting factors within a central processing unit are propagation delay and the distance to physical memory. The next limitation is the core oscillator that acts as the CPU's clock; there are no moving components, so the physical limitations of the medium being used impose a speed limit on the clock itself. If room-temperature superconductors become a reality, the remaining limitation on compute speed will be the medium's permittivity, and information will travel at close to the speed of light, c.


When a computing machine accesses physical storage, a section of memory mapped to the device's controller is used as the interface; that interface in turn has to go through the hardware abstraction layer and operating system (including all of its dependencies) to locate the appropriate driver, which then uses the controller's native commands to move information between core memory and non-volatile storage. Latency is introduced first when core memory is used, and again, to a far greater degree, between the controller and the storage device itself. (Developer Shed, 2004)ii Average core memory latency is measured in microseconds, while average hard drive latency is measured in milliseconds. (Storage Review, 2005)iii
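Taking the figures above at face value (about a microsecond for the core-memory path and about ten milliseconds for a mechanical drive; both are assumed round numbers for illustration), a short sketch makes the gap concrete.

# Compare an assumed memory access time with an assumed disk access time.
memory_latency_s = 1e-6    # ~1 microsecond, per the figure quoted above
disk_latency_s = 10e-3     # ~10 milliseconds of seek plus rotational delay

ratio = disk_latency_s / memory_latency_s
print("one disk access costs roughly %d memory accesses" % ratio)   # ~10,000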


Current methods for increasing storage seek, read and write performance include using serial buses rather than parallel ones (Dorian Cougias, 2003)iv; employing techniques such as command queuing (Patrick Schmid, 2004)v to give the drive itself a kind of "branch prediction"; increasing the local buffer on the drive so less time is spent physically looking for bits; and using multiple drives in a RAID configuration, via specialized hardware or software, to aggregate the available I/O bandwidth. This also includes Storage Area Networks, which in essence are local drives put somewhere else, with gargantuan amounts of input/output bandwidth between the drives and the computers that use them.

Improving operational storage latency involves two kinds of changes: the first family requires no hardware modification and falls into the category of software optimizations; the second requires hardware configuration and architectural changes.


Here is a list of optimizations, drawn from my personal experience, that will improve storage latency within the Intel PC architecture:


1. Use a file-system layout that places the most frequently accessed files close together on the fastest (outer) tracks of the platter.

2. Use an optimized chunk size for the file system and its application. This is a delicate endeavour and some debate its validity; vendors such as Google use GFS, which has file chunks 64 MB in size, whereas the default NTFS cluster is 4 KB (configurable up to 64 KB); GFS is optimized for web crawling and reliability on commodity hardware. (Ghemawat et al, 2003)vi

3. Ensure that the drive has a dedicated bus; for parallel ATA systems this meant purchasing extra controller hardware, whereas with Serial ATA and Serial Attached SCSI controllers each drive gets its own link as standard.

4. Keep all controller driver software at the most current stable version available, including the drive's and the controller's firmware.

5. Within the software driver, offload as many storage calculations as possible to dedicated hardware; these options usually appear in the driver settings or BIOS and may be implemented in either the system or the controller itself.

6. If multiple disks are available, configure a RAID array; depending on the application, two drives in a RAID 0 array can approach twice the read and write performance, with half the reliability (see the sketch after this list).

7. If a page file is used, give it a static size; otherwise, unless it is required, remove the page file.
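Here is a minimal sketch of the RAID 0 trade-off mentioned in item 6, with made-up per-drive figures: striping scales throughput with the number of drives, while the array survives only if every member drive survives.

# RAID 0 trade-off sketch; the per-drive numbers are assumptions for illustration.
drive_throughput_mb_s = 100    # sequential throughput of one drive
annual_failure_prob = 0.03     # assumed chance that one drive fails in a year
n_drives = 2

# Striping aggregates bandwidth across the drives...
array_throughput = n_drives * drive_throughput_mb_s

# ...but losing ANY drive loses the whole array (RAID 0 has no redundancy).
array_failure = 1 - (1 - annual_failure_prob) ** n_drives

print("throughput: %d MB/s vs %d MB/s for one drive"
      % (array_throughput, drive_throughput_mb_s))
print("annual failure probability: %.3f vs %.3f for one drive"
      % (array_failure, annual_failure_prob))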


Now for the architectural changes that would improve storage latency. To increase overall system performance and reduce primary latency, the speed of the front-side bus and the core memory bandwidth are the first places all systems vendors work to improve; increasing the available memory bus bandwidth and reducing I/O latency improves storage latency because memory is the first bottleneck in the systems architecture. Architectural changes are relatively expensive compared with software optimizations and usually take time to implement, as the storage vendor consortium must adopt them as manufacturing standards; unless they are developed by that consortium, or are more cost-effective than a current technology, they will not usually reach the open market.



The second place to increase available input/output (I/O) bandwidth is the drive's bus itself. The current speed of Serial ATA is 300 MB/s (3 Gbps), achieved across a thin serial interface using two differential signal pairs; the next generation of Serial ATA will be capable of 600 MB/s, or 6 Gbps (Leroy Davis, 2008)vii. The increased bus speed will require that future drives have larger local buffers and better command queuing. The methods for increasing drive performance without modifying the disk's rotational speed are as follows:


1. Increase areal storage density and reduce platter diameter, using new magnetic substrates with smaller, more stable magnetic domains and smaller drive heads.

2. Increase the drive's buffer size, preferably in powers of two (128 MB, 256 MB, 512 MB…), thus reducing the number of seek, read and write commands actually issued to the platters.

3. Increase the drive controller's input/output bandwidth on both sides of the controller, i.e. from the drive to the controller and from the controller to core memory via the driver and operating system, including increasing the controller's bus clock rate.
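The 3 Gbps / 300 MB/s and 6 Gbps / 600 MB/s figures quoted above are consistent with SATA's 8b/10b line coding, in which every 8 payload bits are sent as 10 bits on the wire; a quick sketch of that arithmetic:

# SATA line rate versus usable payload rate under 8b/10b encoding.
def sata_payload_mb_s(line_rate_gbps):
    bits_per_second = line_rate_gbps * 1e9
    data_bits = bits_per_second * 8.0 / 10.0   # 8 payload bits per 10 line bits
    return data_bits / 8.0 / 1e6               # bytes per second -> MB/s

for gbps in (1.5, 3.0, 6.0):
    print("%.1f Gbps link -> %.0f MB/s payload" % (gbps, sata_payload_mb_s(gbps)))
# 1.5 -> 150 MB/s, 3.0 -> 300 MB/s, 6.0 -> 600 MB/s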

Although this discussion question refers to physical disk storage, a trend is emerging toward non-volatile solid-state storage based on NAND flash technology; IBM has also demonstrated a polymer-based device, called the Millipede, that has shown promise with areal densities far higher than physical disks. (Vettiger et al, 1999)viii Although advanced concepts such as holographic storage, AFM storage and others have been around for a long time, they are not yet cost-effective enough to be adopted as non-volatile storage solutions by industry.


i Edwin F. Taylor, John Archibald Wheeler (1992) Spacetime Physics, 2nd ed. United States: W.H. Freeman & Co


ii Jkbaseball, Developer Shed (2004-11-30) Effects of Memory Latency [Online] World Wide Web, Available From:

http://www.devhardware.com/c/a/Memory/Effects-of-Memory-Latency/

(Accessed on Oct 1st 2008)


iii Charles M Kozierok, Storage Review (2005) The PC Guide – Latency [Online] World Wide Web, Available From:

http://www.storagereview.com/guide2000/ref/hdd/perf/perf/spec/posLatency.html

(Accessed on Oct 1st 2008)


iv Dorian Cougias, Search Storage (2003) The advantages of Serial ATA over Parallel ATA [Online] World Wide Web, Available From:

http://searchstorage.techtarget.com/tip/1,289483,sid5_gci934874,00.html

(Accessed on Oct 1st 2008)

v Patrick Schmid, Toms Hardware (Nov 16 2004) Can Command Queuing Turbo Charge SATA Hard Drives? [Online] World Wide Web, Available From:

http://www.tomshardware.com/reviews/command-queuing-turbo-charge-sata,922.html

(Accessed on Oct 1st 2008)

vi Sanjay Ghemawat, Howard Gobioff, Shun-Tak Leung, 19th ACM Symposium on Operating Systems Principles (Lake George, New York, 2003) The Google File System [Online] World Wide Web, Available From:

http://labs.google.com/papers/gfs.html

(Accessed on Oct 2nd 2008)

vii Leroy Davis, Interface Bus (Sep 17 2008) Serial ATA Bus Description [Online] World Wide Web, Available From:

http://www.interfacebus.com/Design_Connector_Serial_ATA.html

(Accessed on Oct 1st 2008)

viii Vettiger et al, IBM Journal of Research and Development (1999), The Millipede – more than one thousand tips for the future of AFM data storage [Online] World Wide Web, Available from:

http://www.research.ibm.com/journal/rd/443/vettiger.html

(Accessed on Oct 2nd 2008)

Tuesday, September 30, 2008

Impact of increased storage space.

Yay, another week and another 3 assignments; here's my latest:

Since the invention of digital computing machines such as the Manchester Small-Scale Experimental Machine (University of Manchester, 1998)[i], non-volatile storage has been a requirement arising from the stored-program concept. (Brookshear et al, 2009, p.102)[ii] As outlined in sections 2.1 to 2.4 of that text, machine language in conjunction with computer systems architecture is what is used to accomplish work within any digital processing system; although the machine language, system interface buses, volatile memory (i.e. Random Access Memory, RAM) and non-volatile storage mechanisms may differ from system to system, the one requirement all computing machines share is non-volatile storage accessed via a controller. (Brookshear et al, 2009, p.123 – p.125)[iii]

The first non-volatile storage medium used was paper tape, followed by punched cards, cathode-ray-tube stores and eventually magnetic tape, magnetic drums and our current paradigm of magnetic platters. (Computer History Museum, 2006)[iv]

Each shift within the non-volatile storage paradigm increased the amount of storage and thus the overall functionality of the available computing systems, since they could operate with faster input and output and work with ever-larger data sets. The increase in size and flexibility was not without sacrifice: the impermanence of current magnetic systems, as well as their reduced size, has led to many ethical concerns about data disposal and management, as news articles commonly report the loss and potential theft of information on modern hard drives. (BBC News, 2007)[v]

Generally, the increase in available storage space has led to larger operating systems, more abstracted programming languages that require less programming time, and faster overall system operation; interestingly, a Moore's-law-like trend applies to storage as well as to integrated circuitry.

The current paradigm shift toward NAND-based flash memory is one of cost, speed, power consumption and size. Current hard disk systems use technologies such as perpendicular recording with giant magnetoresistive (GMR) heads to achieve areal densities of tens of gigabits per square centimetre (Hitachi, 2007)[vi]; a 1 terabyte (1,000,000,000,000 bytes) drive may be purchased for around one hundred U.S. dollars, which works out to roughly ten cents per gigabyte. (Pricewatch, 2008)[vii]

IBM has stated that as GMR head size approaches its physical limit, the only way to increase capacity in a mechanical drive is to increase the platter count; hence the reason it sold its storage manufacturing arm to Hitachi. (EETimes India, 2008)[viii] The limitation of a physical system containing moving parts is latency: even though current drives spin at anywhere between 7,200 RPM and 15,000 RPM, the motor spinning the platters consumes a large amount of power compared with the requirements of non-volatile flash memory. The main trade-offs with the current trend toward NAND flash are that its areal density is lower, its reliability is about the same, and its cost per gigabyte is more than double that of standard hard disk drives. (ACSL, 2008)[ix]

Since storage has become a commoditized market, the motivating factor behind the adoption of any new or existing technology is cost. The usefulness of a computer is a function of its available computing power in conjunction with its available data set; increase the data set and the breadth of function increases. This in turn translates into larger, more functional applications and operating systems. Although the cost of non-volatile flash memory is an order of magnitude higher for solid-state systems, which offer only a modest amount of storage for the same price compared with the current standard of hard disk drives, flash memory's benefits are lower power consumption, ruggedized operation and lower interface latency. The current reduction in the price of non-volatile flash memory therefore allows mobile applications with longer operating times, for devices ranging from multimedia players such as the iPod to ultra-thin-and-light portable computers such as the MacBook Air. Ultimately this allows a systems designer to choose the storage type by operational environment and application cost sensitivity: instead of fitting the application to the computer, we now design the computer to meet the application.

The changes in data processing and storage interfaces require new, faster bus technologies such as Serial ATA and Serial Attached SCSI, as opposed to the previous parallel buses; these provide higher throughput and more capable controllers with integrated optimizations such as Native Command Queuing.

These architectural changes will allow future operating systems to be ten or more times larger than previous ones, thanks to the increased availability and decreased cost of storage space, which in turn means more options and applications for the end user. Data-intensive tasks such as computer-aided design, video editing, multimedia production and high-definition media playback, once the preserve of specialized systems, are now economically viable on inexpensive desktops, laptops, gaming platforms or even handheld devices.

The reduction in the cost of storage has changed the way people watch and create movies and purchase, enjoy and produce music, and it has driven the increased complexity and cost of video games. Users now store vast amounts of information on their personal computers; systems that were once limited to document production, web browsing and the occasional video game now store entire libraries of music and movies. The increased reliance on this storage has brought issues with personal privacy and open international piracy of copyrighted works.

As the cost of storage continues to decrease, it will change how we entertain ourselves and what kinds of information we store, modifying our shopping habits and creating distributed computing environments like Folding@home, which is arguably the world's most powerful supercomputer (Vijay Pande, 2008)[x]. This shift in paradigm has produced self-published content, often referred to as Web 2.0 (O'Reilly, 2005)[xi], where the data and information we hold drive what and how we choose to consume. The increased reliance on storage also makes hardware failure a larger risk than it was twenty years ago: where it once took a fire to destroy your music collection, home videos, books and articles of value (including ancient software encoded on punch cards), the same destruction may now be wrought by a single computer virus and an unwary end user.

[i] Manchester University (1998), 50th Anniversary of the Manchester Baby Computer [Online] World Wide Web, Available From http://www.computer50.org/ (Accessed September 30th 2008)

[ii] Glenn Brookshear (Pearson Addison Wesley, 2009) , Computer Science an Overview 10th ed. international. United Kingdom: Pearson Addison Wesley

[iii] Glenn Brookshear (Pearson Addison Wesley, 2009) , Computer Science an Overview 10th ed. international. United Kingdom: Pearson Addison Wesley

[iv] Computer History Museum (2006), Timeline of Computer History [Online] World Wide Web, Available From: http://www.computerhistory.org/timeline/?category=cmptr

(Accessed September 30th 2008)

[v] BBC News (2007), Millions of L-Driver Details lost [Online] World Wide Web, Available from: http://news.bbc.co.uk/2/hi/uk_news/politics/7147715.stm (Accessed Sep 30th 2008)

[vi] Hitachi (2007), Hitachi achieves nanotechnology milestone for quadrupling terabyte hard drive [Online] World Wide Web, Available From: http://www.hitachi.com/New/cnews/071015a.html (Accessed Sep 30th 2008)

[vii] Pricewatch (2008), 3.5 sata 1tb listings [Online] World Wide Web, Available From:

http://www.pricewatch.com/hard_removable_drives/sata_1tb.htm (Accessed Sep 30th 2008)

[viii] EETimes India (2008), IBM, Qimonda, Macronix Plot Storage Tech Roadmap [Online] World Wide Web, Available From: http://www.eetindia.co.in/ART_8800440589_1800009_NT_bd0e8ab4.HTM (Accessed September 30th 2008)

[ix] ACSL (2008), Flash Memory vs. HDD who will win?, [Online] World Wide Web, Available from: http://www.storagesearch.com/semico-art1.html (Accessed Sep 30th 2008)

[x] Vijay Pande and Stanford University (2008), Folding@home statistics page [Online] World Wide Web, Available from: http://folding.stanford.edu/English/Stats (Accessed Sep 30th 2008)

[xi] Tim O'Reilly (2005), Design Patterns and Business Models for the Next Generation of Software, [Online] World Wide Web, Available From: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html (Accessed Sep 30th 2008)

Monday, March 17, 2008

Whatever happened to my privacy?

A long, long time ago in a galaxy far, far away, there were a bunch of dudes who thought your rights were worth something. They have long since died. Here in Canada we still have some privacy advocates, but their ideas are eroding as well; we passed an act similar to the Patriot Act in the U.S., our "War on Terror" is simply severely underfunded. The United Kingdom did the same thing; what's really odd is that theirs was sparked by train bombings.



A great man once said:

If we sacrifice liberty for security even temporarily, we deserve neither!



That's as true today as it was three hundred years ago. In the latest onslaught against the rights of the individual, Phorm in the United Kingdom is going to team up with BT. Now, I hate British Telecom for a number of reasons, one of them being that it's a state-subsidized monopoly on telecommunications. Any monopoly on telecom is bad; I say let the companies play and kill one another, and then the monopoly that comes out of that should be told to play nice, and if it doesn't, break it up too. Say, whatever did happen to Bell?



Actually I'm against federal regulations; anyone should be able to connect anything to any network with a minimal licensing fee to ensure standards compliance, and that's it. Right now you can't do that: in the United States it's a bit easier, in Canada it's nigh impossible without a huge hunk of money for the CRTC, and in the UK it's against the LAW.




Another thing I once read was a compelling argument by one of the anonymous lawyers over at Lex Situs Conflictus (upper right-hand corner link), stating that in Canada packet shaping amounts to an invasion of privacy, since everything inside the frame beyond the headers is the property of the originator, and in Canada if your computer is talking to people then it's as good as you talking to people.




The issue is that now any government agency, or company for that matter, may listen in on that conversation. Initially this was to make sure you weren't planning to blow something up; now it's to sell you more crap you don't need.




Yet another reason to use Tor, or Adblock+ for Firefox.


Do it! Do it Now! Stick it to the man!

Websites need advertisements to create revenue, right? Since they all evolve into avenues for advertising anyway, screw 'em!

I will gladly click on ads on sites I like, from vendors I hate, that link to things I'll never buy, and that in turn hoses the company I don't like; the nice thing about tabbed browsing is that I don't even have to look at the page.



As to those of you who would like to watch or listen in on my conversations, I say: go right ahead, I have nothing to hide. And I tend to use a lot of encryption for regular shit, so if you want to see how I asked my girlfriend whether she wanted Vex or beer last night, you may have to crack some moderately weak encryption first.



Which brings me to my point: when did it become O.K. for the state to eavesdrop on its people?



To those of you still clinging to that myth of terror, I submit to you this:
Don't Believe the Hype

Friday, February 29, 2008

Screw gas, make your own sunlight! Mr. Fusion, here we come!

So yet another morning of generating reports has gone by. I came across IEC-based fusion in my last post about Electron Power Systems.

Then I thought, why not look into that idea? It's a viable form of low-cost fusion, so what the hell, right?

In my mind physicists are people who have energy complexes; they love putting huge amounts of energy into tiny spaces, rather like a macho man who drives a really nice car. Science won't let you afford a nice car, but it just might save the planet. (tm)

Enter the Google talk with Doctor Robert W. Bussard, available here:
http://video.google.com/videoplay?docid=1996321846673788606

If you have the hour and thirty-six minutes, as well as a good command of the English, engineering and physics languages, you may just understand what he's saying.

I'm not even going to attempt to digest it all at the moment, because I am still digesting what he means by B-fields (magnetic fields?) and gyro-radii (the radius of rotation of an electron?); of course this could be old-time cyclotron, gyrotron and thyratron speak with a hint of gaseous-electronics experimentation creeping in. I'm a layperson, and this man reminds me of just how lay a layperson I am when it comes to real hardcore physics.

Semantics aside, he has in essence argued that there's a cheap alternative to tokamaks. EPS, the guys I posted about earlier, are using an electron tube, you know, the kind of thing we use to accelerate particles in large-radius cyclotrons to smash atoms. Dr. Bussard goes into the limitations of that configuration in this video (and his paper), due to what he calls "force equinoxes" and their relation to actually fusing hydrogen; however, he uses the same hydrogen-boron mix.

The Paper's here:
http://www.askmar.com/ConferenceNotes/2006-9%20IAC%20Paper.pdf

I also found this guy:
http://iecfusiontech.blogspot.com/2007/05/polywell-making-well.html

He is also on the same quest to understand polywell-based IECs; he mentions this paper on his site:
http://wwwsoc.nii.ac.jp/aesj/division/fusion/aesjfnt/Yoshikawa.pdf

Basically this is a huge-ass source of neutrons on a small scale; however, that device might be able to scale up to power-generation standards.

The device itself is simple: based on the original design, it is a sphere with a cage inside it that accelerates the ions into a well using an electric current.

The EMC2 device (Bussard's company) uses magnetic fields to form the "sphere".

I just thought I'd mention it here since Dr. Bussard died in 2007, but his company EMC² has received funding; they need $200 million to produce a radiation-free form of sustainable fusion, and he even left us with a list of people to do the job.

It's sad to think that had we taken all the money spent on war over the past decade and put it towards this technology, and, say, oncology, we might just have saved the inventor of the world's most efficient fusion device.

If he is proved correct, I hope his children will accept his Nobel Prize on his behalf.

Thursday, February 28, 2008

Grow your gas!

Among the millions of ideas that plague my clouded and hectic mind, I've wondered about a lot of things. One of them is our ongoing energy crisis.

A good friend once told me that if I could find a way to access the frozen methane under the continental shelf, I'd be rich a few billion times over, since it is the world's largest deposit of natural gas; it's just below a thousand feet of water and a few hundred metres of rock, and frozen.

Then there's the pseudoscience crowd claiming that Maxwell, Planck and Einstein were wrong, that Tesla was right, and that we can pull energy from static charges or the zero point.

Zero-point energy (ZPE) does exist and is real; the only related effect that has been measured by physicists is the Casimir effect.
http://en.wikipedia.org/wiki/Casimir_effect

This force is more akin to gravity than anything else; the brilliant folks at the University of St Andrews in the United Kingdom have worked out how it might be used to levitate tiny objects, which I posted about last year.

http://www.st-andrews.ac.uk/~ulf/levitation.html
Here's their page above, and some cool photography (not doctored) below.

http://www.dailygalaxy.com/my_weblog/2007/08/physicists-use-.html

In the world of science, theoretical physics reigns king. I came across this site while searching for interesting reproductions of ball lightning:
http://www.electronpowersystems.com/

The sad part is that their e-mail does not work, or needs to be changed, since I wanted to start a dialogue with the owner. I seriously hope he's not a fake; for someone trained at West Point, I'd hope he's still around and getting funding.

They claim to have found a way to blend toroidal plasmas to facilitate fusion, the "holy grail" of energy physics that promises to help humanity stop being dependent on deposits of ancient sunlight. Anyhow, let's hope he gets his products to market and that he's not another "scientist-ion". They have enough weight to have been examined by the NASA Institute for Advanced Concepts (NIAC) for a potential award. He claims to have produced a self-contained toroid of plasma using an aerosolized mix of boron and hydrogen, and to have stumbled upon this in 1992 as a result of studying ball lightning.

The truth is that gas, oil and jet fuel, the backbones of our daily lives, are in their current form being exhausted. While exhausting them we are also raising the global temperature; any fan of Al Gore and An Inconvenient Truth knows that the ecosystem and biosphere are small and fragile and that we are choking ourselves to death. Oddly enough, the NREL in the United States has spent millions studying algae, and one of the funny aspects of this is that the following algae may be responsible for our current deposits of stored sunlight.

Algae and phytoplankton, over the lifetime of the seas since prehistoric times, have produced, stored and left behind the energy we use today; it's a natural by-product of photosynthesis in various strains.

Botryococcus braunii: this slow-growing alga traps phosphorus and produces up to 80% of its dry weight in hydrocarbons; it grows best at 23 degrees centigrade and at about 60 W per square metre of sunlight. It doubles every 72 hours (that's slow for algae) and traps a number of rather toxic substances from the air, so it's an excellent candidate for carbon sequestration. You can even process it into kerosene, benzine (gasoline), diesel and low-pressure gas; it would be like a crude oil with no sulphur content. It's just not economically viable to do this yet; when crude hits $110 a barrel you'll see the greenhouses going up everywhere (see the sketch below for what that doubling time implies). A lot of current biologists believe this alga is what made much of the oil so many years ago.

http://en.wikipedia.org/wiki/Botryococcus_braunii
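For a feel of what that 72-hour doubling time implies, here is a small sketch assuming unconstrained growth from a 1 kg starter culture and a 50% usable hydrocarbon fraction (both made-up numbers; real ponds are limited by light, nutrients and harvesting, so treat it as an upper bound):

# Unconstrained exponential growth with a 72-hour doubling time.
doubling_time_h = 72
start_mass_kg = 1.0    # assumed starter culture
oil_fraction = 0.5     # assumed usable hydrocarbon fraction after losses

for days in (3, 9, 30):
    doublings = days * 24.0 / doubling_time_h
    biomass = start_mass_kg * 2 ** doublings
    print("day %2d: about %.0f kg biomass, %.0f kg hydrocarbons"
          % (days, biomass, biomass * oil_fraction))
# day 3: 2 kg, day 9: 8 kg, day 30: about 1,024 kg, before real-world limits.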

Now you can't make your car run on this unless you have a cracking and refining plant close by, and no one has built any of those since the '60s. In fact, I have heard rumours that current oil producers and sales companies in North America pay off people who wish to build more refineries, since that would increase supply and reduce their profit margins.

Here's something everyone can do, though: biodiesel can be made from algae ponds, which are the most cost-effective way to produce it, and the best method of production in my mind is to enclose raceway ponds in greenhouses. One acre of greenhouses (or one acre of ponds) could produce around 100,000 L of algal oil per year, which may be transesterified into diesel (B100) and glycerin. The strains that are ideal for this are listed in many places, but I like this one:

http://en.wikipedia.org/wiki/Algae_fuel
According to them, it would take around three Belgiums, or something a little larger than the state of Maine, in aquaculture space to meet the current demand for fuel, and it's only going to get worse.

Enter my favourite problem from Texas, Prymnesium parvum, or "toxic golden algae" as it's called, due to its inadvertent killing of fish.

http://en.wikipedia.org/wiki/Prymnesium_parvum
According to the guys at Oilalgae, it has a 22% to 38% lipid content; the cautionary note, however, is that it produces DMSP:
http://en.wikipedia.org/wiki/Dimethylsulfoniopropionate

This is actually beneficial to aquaculture when all you want is the fatty lipids from the algae: since the algae kills everything around it, bioreactor contamination is not an issue, and it has a high lipid content to boot. It has a bit of sulphur, but nothing close to toxic levels compared with crude pulled from Alberta.

Once you've got that, your one acre of greenhouse could produce up to 100,000 L of biodiesel per year. Raw biomass and algal oil can be cracked just like raw crude; in fact, after processing you could use the leftover algal cake (after removing the sulphur) as a feed, that is, pelletize whatever isn't turned into glycerin or B100.

This brings me to my next question: why isn't anyone doing this? (They are, actually.) But it's happening very quietly and behind the scenes, and the companies that are starting it are big.