Klaus Holtz, Eric Holtz, Diana Kalienky
Autosophy, 602 Mason #305, San Francisco, CA 94108, USA
holtzk@autosophy.com, Tel. 415-834-16, www.autosophy.com
Abstract
Data processing computers may soon be eclipsed by a next generation of brain-like learning machines based on the "Autosophy" information theory. This will have a profound impact on communication and computing applications. Data processing computers are essentially adding or calculating machines that cannot find "meaning" as our own brains obviously can. No matter the speed of computation or the complexity of the software, computers will not evolve into brain-like machines. All that can be achieved are mere simulations. The basic problem can be traced back to an outdated (Shannon) information theory that treats all data items (such as ASCII characters or pixels) as "quantities" in meaningless bit streams. In 1974 Klaus Holtz developed a new Autosophy information theory, which treats all data items as "addresses." The original Autosophy research explains the functioning of self-assembling natural structures, such as chemical crystals or living trees. The same natural laws and principles can also produce self-assembling data structures, which grow like data crystals or data trees in electronic memories, without computing or programming. Replacing the programmed data processing computer with brain-like, self-learning, failure-proof "autosopher" promises a true paradigm shift in technology, resulting in system architectures with true "learning" and eventually true Artificial Intelligence.
1. Introduction
Data processing computers are essentially adding or calculating machines that during the last two centuries have evolved from mechanical devices into electronic computers. However, it has long been realized that our own brains do not behave like calculating machines or computers. Our self-learning, self-organizing brains require no programming or human supervision of their internal functioning. Processes that are easy for a computer, such as numerical calculations, are very difficult for us, while learning and pattern recognition, which come easily to a human child, are exceedingly difficult for computers. Clearly, computers and our brains work according to entirely different principles. In 1948 Claude Shannon published "A Mathematical Theory of Communication" [33], which defines "information" in binary digits (bits). His information theory now dominates both telecommunication and computing.
In 1974 Klaus Holtz formulated a new "Autosophy" information theory [31] [32], which defines "information" by data "content" or "meaning." In 1988 a hardware model of a self-learning "autosopher" was built to verify the theory [28] [30]. While self-learning, brain-like machines have remained in the laboratories, Autosophy data compression algorithms are widely used in virtually all Internet communications. A new universal bit data format was developed for Internet television to allow real-time multimedia data communications via the Internet [4]. The same data format can be used to store and retrieve real-time video and sound in digital archives. It is ideally suited to the Internet's packet switching environment, avoiding its Quality of Service (QoS) problems [7]. When self-learning Autosopher [1] finally emerge from the laboratories they will have a profound impact on the Internet and many other computing applications [5].
Large corporations, as well as governments and libraries, are now building very large multimedia on-line databases and archives [3], both to aid current operations and to preserve our cultural heritage. Rapid advances in technology are meanwhile producing new imaging and storage devices at an ever faster rate. By the time new archiving systems become operational, their underlying technology is often already obsolete. The Internet is meanwhile rapidly replacing conventional fixed bit rate channels with packet switching protocols. Real-time video and sound transmission via the Internet, though, is very difficult because of the Internet's Quality of Service (QoS) problems and the need for data compression and encryption. Future information systems must combine enormous storage capacities with very fast access and low cost. They must be able to mix all multimedia data types (live video, still images, sound, text, and random bit files) in a universal data format that will not become obsolete when new storage devices or communication channels become available. The archives should be self-indexing and eventually communicate with us using grammatical spoken languages. Conventional computer technology simply cannot provide the means for building these interactive databases of the future.
The new systems will require enormous capacity, solid-state, non-volatile, content or random addressable memories. These storage devices must be small enough to fit into mobile robots and consume very little power so as to conserve limited resources. When robots finally evolve from mere information access terminals to physical interactions with human beings, then near-absolute reliability is essential even in cases of severe physical damage. A malfunctioning robot may cause great damage and even harm to human beings.
A Content Addressable Read Only Memory (CAROM) is now being developed that is printed on thin stainless steel foil using Thin-Film-Transistor (TFT) technologies [13]. A robot memory may consist of a spool of stainless steel foil the size of a roll of toilet paper. The memory spools would act like a capacitive load to recycle the energy in each access cycle, resulting in negligible energy consumption for solid-state, non-volatile memories. Two memory spools may be configured as a pair to provide self-checking, self-repairing, self-healing, and self-cloning memories. A memory error in one spool can be automatically repaired from the complementary memory spool to ensure virtually failure-proof operations.
Both the theoretical knowledge and the conceptual storage devices are now becoming available for building these brain-like systems of the future. The systems may absorb all human knowledge while communicating with us in grammatical languages. From there, it could be only small steps to intelligent self-learning robots and eventually true Artificial Intelligence.
2. The Evolution of Autosophy
A new information theory could have a revolutionary impact on communication, computing, data storage, and encryption [11]. It could eventually replace the programmed data processing computer with brain-like, self-learning, failure-proof electronic machines.
There are two competing information theories: the classical Shannon information theory and the newer Autosophy information theory. In 1948 Claude Shannon published "A Mathematical Theory of Communication," which defines "communication" as binary digits in meaningless bit streams. In 1974 Klaus Holtz developed a new Autosophy information theory, which defines "communication" according to data content or "meaning." Both theories can be used for data communication and computing, but they yield entirely different results.
The Autosophy theory evolved from theoretical research into self-assembling natural structures, such as chemical crystals, living trees, or societies [31]. The word "Autosophy" is a combination of two Greek words: autos (meaning self, as in automobile) and sophia (meaning knowledge or wisdom, as in philosophy). This can be translated as self-knowledge or the understanding of oneself. An "autosopher" is a self-learning entity that may be either electronic or biological. On June 17, 1974 Klaus Holtz realized that the same methods and principles produce self-assembling data structures in electronic memories. These data structures grow like data crystals or data trees in electronic memories to provide learning modes that strikingly emulate the learning modes of our own brains.
There are now seven known classes of self-learning "Omni Dimensional Networks," each
providing a different learning mode, including learning modes not available in our own brains. Some of these learning networks are already implemented in commercial applications, while others have been
simulated or are known only in theory. A self-learning Autosopher text database was built back in 1988 to verify the theoretical predictions. New applications, such as live Internet video [8] [9] or advanced lossless still image compression [2] [6] [10], are now being added at an accelerating rate.
[Figure 1 content: the example library below, drawn as a tree growing from the Seed (address 0), with Pointer, Gate, and Address fields and the branch tips marked.]

Address  Pointer  Gate
   1        0      R
   2        1      O
   3        2      S
   4        3      E
   5        2      B
   6        5      O
   7        6      T
   8        2      O
   9        8      T
  10        1      E
  11       10      D
  12       10      A
  13       12      D
  14       13      Y
Figure 1. An example of the "Serial" self-learning tree network (Patent 4,366,551)
SERIAL TREE NETWORK GENERATION ALGORITHM
MATRIX = [ POINTER | GATE ]   (The MATRIX is a working register in the hardware.)
Start: Set POINTER = Seed (= 0)
Loop:  Move the next input character into the GATE
       If End Of Sequence (a SPACE character), then output the POINTER as a Tip code; Goto Start
       Else search the library for a matching MATRIX
       If found, then move the library ADDRESS where it was found into the POINTER; Goto Loop
       Else (not found) store the MATRIX into the next empty library ADDRESS;
            Move the library ADDRESS where it was stored into the POINTER; Goto Loop

SERIAL TREE NETWORK RETRIEVAL ALGORITHM
MATRIX = [ POINTER | GATE ]
Start: Move the input Tip code into the POINTER
Loop:  Use the POINTER as a library ADDRESS to fetch the next MATRIX [ POINTER | GATE ] from the library
       Push the GATE into a First-In-Last-Out (FILO) stack
       If the POINTER = Seed (= 0), then pull the output data string from the FILO stack; Goto Start
       Else Goto Loop
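As an illustration only (not part of the original hardware design), the two algorithms above can be sketched in Python; a dictionary stands in for the hardware library, and all names are ours. Fed the words of Figure 1, the sketch reproduces the fourteen library nodes shown there.

# Minimal Python sketch of the serial tree network algorithms above.
# A dictionary stands in for the hardware library; the MATRIX is the
# (POINTER, GATE) pair, and library ADDRESSES are assigned sequentially.

SEED = 0

def learn(text, library, reverse_index):
    """Grow the tree from a space-separated text and return the tip codes."""
    tips = []
    pointer = SEED
    for gate in text:
        if gate == " ":                      # End Of Sequence
            tips.append(pointer)             # output the POINTER as a Tip code
            pointer = SEED
            continue
        matrix = (pointer, gate)
        address = reverse_index.get(matrix)  # search the library for a matching MATRIX
        if address is None:                  # not found: store MATRIX at the next empty ADDRESS
            address = len(library) + 1
            library[address] = matrix
            reverse_index[matrix] = address
        pointer = address                    # move the ADDRESS into the POINTER
    if pointer != SEED:
        tips.append(pointer)                 # flush the last word
    return tips

def retrieve(tip, library):
    """Follow the POINTER trail from a tip code back to the SEED (FILO order)."""
    stack = []
    pointer = tip
    while pointer != SEED:
        pointer, gate = library[pointer]     # fetch the next MATRIX [ POINTER | GATE ]
        stack.append(gate)                   # push the GATE onto the FILO stack
    return "".join(reversed(stack))          # pull the string from the stack

# Usage: the words share prefixes, so common nodes are stored only once.
library, reverse_index = {}, {}
tips = learn("ROSE ROBOT ROOT RED READY", library, reverse_index)
print(tips)                                  # one tip code per word: [4, 7, 9, 11, 14]
print([retrieve(t, library) for t in tips])  # the original words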
The serial network, shown in Figure 1, provides true mathematical "learning" according to the Autosophy information theory. A new unit of knowledge (engram) is created by new information (GATE), related to already established knowledge (POINTER), which may then create a new engram (ADDRESS) as an extension to what is already known. The process can be imagined like growing data trees or data crystals. A stored tree network consists of separate nodes, where each ADDRESS represents an engram of knowledge. The library ADDRESS is a mathematical equivalent to a point in omni-dimensional hyperspace. The content of each library ADDRESS is unique and can be stored only once. One cannot learn what one already knows.
The serial tree network starts growing from arbitrarily pre-selected SEED ADDRESSES. Data transmissions use “tip” codes, which are the node ADDRESSES at the final tip of the tree branches. Each transmitted tip ADDRESS code may represent any length data string. The data strings are later retrieved from the tip codes, in reverse order, by following the POINTER trail back to the SEED ADDRESS. In addition to the serial network, shown in Figure 1, there are six other known self-learning networks, each providing a different learning mode. All seven networks could be developed for data compression and encryption applications and advanced self-learning databases.
Serial networks store serial data sequences, such as text, sound, or serially scanned images [12]
[14] [15]. The algorithm was invented by Klaus Holtz in 1974 (Patent 4,366,551). A similar algorithm (LZ-78) was later developed by Jacob Ziv and Abraham Lempel in 1978 [29]. Most commercial
applications use the LZW code (Lempel-Ziv-Welch), a simplified variation invented by Terry Welch in 1984 [27]. Application examples include the V.42bis data compression standard in modems and the GIF and TIFF formats used for lossless still image compression.
Parallel networks store images in hyperspace funnels, yielding very high image compression and fast access to archives. These networks are especially suitable for archiving and storage of imaging and video data. Machine vision is the ultimate application [16-26].
Associative networks connect various networks into a system. They can, for example, connect questions to answers, text to images, or commands to execution sequences.
Interrelational networks provide grammatical language communication that could evolve into talking databases with speech input and output. Grammatical speech would be the ultimate method of communication between humans and machines.
Logical networks yield an advanced form of self-learning data processing with logical reasoning capabilities. They may evolve into intelligent robots.
Primary networks provide unstructured access to archives or databases through deductive reasoning and automatic indexing.
Hypertree networks promise true brain-like learning, which is currently being researched. This ongoing research may provide new learning modes in the future.
3. What is "Communication"?
The question “what exactly is communication?” can be answered in two very different ways, each
leading to entirely different technologies. In 1948 Claude Shannon published “A mathematical Theory of Communication,” which defines “communication” as binary digits. In 1974 Klaus Holtz developed a new Autosophy information theory, which defines "communication" according to data "content."
[Figure 2 content: data items are transmitted as "quantities"; the video bit rate depends on the video hardware only (the video content is irrelevant): video bit rate = rows x columns x resolution (bits/pixel) x scanning rate (frames/sec).]
Figure 2. Conventional Shannon data and video communication
Communication, according to the Shannon information theory, is mere data in a bit stream that has no "meaning." All data items (ASCII characters or pixels) are regarded as "quantities," which are converted into binary digits (bits) for storage or transmission. A unit of communication is a binary digit, called a "bit," which may also provide a yes-no answer to a question. In television, for example, the video "information" or bit rate is determined by the imaging hardware, i.e., screen size (the number of pixels on the screen), pixel resolution, and scanning rate. The video images actually shown on the screen are irrelevant: a totally random noise image would require the same bit rate as a blank screen. The purpose of a communication is to "remove uncertainty" in the receiver. The higher the transmitted bit rate, the higher the image quality becomes. Any attempt at reducing the bit rate through video compression will "increase uncertainty" and therefore cause image distortions or loss of resolution. The more the video images are compressed, the worse the image quality becomes. Lossy video compression methods (such as Cosine Transforms (JPEG, MPEG), Wavelets, or Fractals) only attempt to hide the distortions from the human observer. This method of communication was developed during an age of primitive telegraph and telephone communication. There is no known biological creature that actually communicates with "quantities" or binary digits. The video quality is determined by the bit rate, whether or not any improvement in the video quality is visible to the human eye. Data encryption is possible using bit scrambling, such as pseudo random number generators, which can be added as a separate feature. Such codes can be broken, with certainty, by high-speed computing and determined effort.
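To make the hardware dependence concrete, the Shannon bit rate formula from Figure 2 can be evaluated for an assumed video format; the numbers below are illustrative assumptions, not figures from this paper.

# Shannon-style video bit rate depends only on the hardware parameters (Figure 2).
# The screen size, resolution, and scanning rate below are assumed for illustration.
rows, columns = 480, 640                # assumed screen size in pixels
bits_per_pixel = 24                     # assumed color resolution
frames_per_second = 30                  # assumed scanning rate

bit_rate = rows * columns * bits_per_pixel * frames_per_second
print(f"{bit_rate / 1e6:.1f} Mbit/s")   # about 221.2 Mbit/s, for noise and for a blank screen alike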
[Figure 3 content: data items are transmitted as "addresses"; the video bit rate depends on the video content only, reflecting motion and complexity in the images (the video hardware is irrelevant).]
Figure 3. Autosophy data and video communication
In Autosophy communications, in contrast, all data items (ASCII characters or pixels) are regarded as "addresses" that convey "meaning." Transmission bit rates are determined by data content. Information, in essence, is only that which is not already known by the receiver -- and only that portion of the data that can be perceived or reproduced by the receiver.
Video is transmitted in tiny pixel clusters, each representing motion and complexity in the
images. Each cluster (up to 16 full color pixels with 16bit/color resolution) is transmitted with a universal bit packet code to be inserted at any location in the output image. All communications are regarded as "addresses" acting as entry pointers to various knowledge libraries. A unit of communication is an "address" (tip), which may create a unit of knowledge (an "engram") in the receiver. The purpose of a communication is therefore to increase knowledge in the receiver, i.e., to teach it something. High "lossless" data compression is achieved by transmitting only that which is not already known to the receiver, i.e., that which is not already in the receiver's libraries. Additional compression is achieved by transmitting only the portions of the data that are actually perceptible or reproducible by the receiver.
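The content dependence can be illustrated with a small sketch of our own, greatly simplified: only the pixel clusters that changed between two frames are counted, and each changed cluster is charged a fixed packet cost. The 4x4 cluster size and the 64-bit packet size are assumptions chosen for illustration.

# Sketch: Autosophy-style video cost depends on content, not hardware.
# Only the pixel clusters that changed between two frames are transmitted.
# The 4x4 cluster size and 64-bit packet size are illustrative assumptions.

CLUSTER = 4
BITS_PER_PACKET = 64
ROWS, COLS = 480, 640

def blank_frame(value=0):
    return [[value] * COLS for _ in range(ROWS)]

def changed_clusters(prev_frame, next_frame):
    """Count CLUSTER x CLUSTER blocks that differ between the two frames."""
    count = 0
    for r in range(0, ROWS, CLUSTER):
        for c in range(0, COLS, CLUSTER):
            block_prev = [row[c:c + CLUSTER] for row in prev_frame[r:r + CLUSTER]]
            block_next = [row[c:c + CLUSTER] for row in next_frame[r:r + CLUSTER]]
            if block_prev != block_next:
                count += 1
    return count

# A static scene with one small change: only a few clusters need transmitting.
prev_frame = blank_frame()
next_frame = blank_frame()
for r in range(100, 108):
    for c in range(200, 208):
        next_frame[r][c] = 255            # an 8x8 patch changes

packets = changed_clusters(prev_frame, next_frame)
print(packets, "cluster packets,", packets * BITS_PER_PACKET, "bits for this frame")
# An unchanged frame costs 0 bits; a random-noise frame would cost the most.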
4. Computer vs. Autosopher
Today's programmed data processing computer may soon be replaced by a next generation of brain-like self-learning autosopher. While data processing computers may not entirely disappear, their functions will be merged into the autosopher.
[Figure 4 content: input data "quantities" are processed into output data "quantities."]
Figure 4. The programmed data processing computer
The data processing computer is essentially a blind adding or calculating machine. Its purpose is to combine input data items (ASCII characters or pixels), all regarded as "quantities," into useful output data items, according to a stored program. The computer cannot learn or comprehend the meaning of the data. No matter how much data is processed or stored, the computer itself will not get more intelligent. All intelligence is contained in the programming, which is limited by the intelligence of the human programmers. Artificial Intelligence in a computer will always remain a mere simulation.
[Figure 5 content: sensor data "addresses" enter and leave the higher learning networks (associative/connection, interrelational/language, logical/computing, primary/reasoning, hypertrees/brain-like), which share a random/content addressable memory (CAROM) configured as a self-repairing, self-healing, failure-proof DECAM.]
Figure 5. Self-learning, brain-like, failure-proof autosopher
The operations in an autosopher, in contrast, are based on self-growing data networks, which grow like data “crystals” or “trees” in electronic memories, without programming or outside supervision. All data items (ASCII characters or pixels) are regarded as "addresses" that have "meaning" and which point to entries in various types of knowledge libraries. Once the internal learning algorithms have been set up, there is no need for programming or supervision of the internal system function. An autosopher acts essentially like a "black box" to organize its own internal operations. All input data (addresses) define or create their own storage locations in the memory. There are seven known classes of self-
learning networks, each providing a different learning mode. Simple input networks, like the serial and the parallel networks, convert input data patterns into higher level codes, which are then applied to the higher learning networks for higher learning functions. All types of learning networks may share the same mass memory device, which can be made virtually failure proof. A first hardware autosopher was built in 1988 to verify the theoretical predictions.
5. A Universal Hardware-Independent bit Data Format
Data communication, data compression, data storage, and data encryption must all be integrated into communication and archiving systems. Real-time data communication via the Internet, for example, is subject to the Internet's Quality of Service (QoS) problems, including unpredictable bit rates, packet latencies, transmission errors, and packets being delivered out of sequence or even dropped altogether in a congested network. The future Internet and archives will require the simultaneous transmission of all multimedia communications, including live video with synchronized live sound, text, still images, and random data files. All these data types must be randomly mixed together and remain synchronized in the Internet's intermittent packet stream. The new bit data format, shown in Fig. 6, could make all future communications compatible. It was developed for the purpose of transmitting live video with synchronized sound via the Internet, while avoiding its Quality of Service (QoS) problems. The data format may be used in all media, such as cellular telephones, satellites, radio, and the Internet.
The Autosophy information theory claims that all data communications can be defined by data "content" only. Hardware parameters (such as screen size, pixel resolution, and scanning rates for television) would become irrelevant. The universal data format could make all future communications compatible and virtually eliminate the Quality of Service (QoS) problems that currently occur when sending live video with synchronized live sound via the packet switching Internet. That would be a paradigm shift for communication and information systems.
[Figure 6 content: four packet layouts, each identified by a 2-bit header.
 Autosophy real-time video (10): hyperspace library address (16 bit), screen address of the start pixel (20 bit), type, red, green, blue, brightness (log.), spare.
 Autosophy real-time sound (11): library address (16 bit), +/- amplitude (log.), duration in 0.1 ms (16 bit), rotating index in 0.1 ms (16 bit), channel, spare.
 Autosophy compressed text (01): index (8 bit), characters 1 to 6.
 Random bit files and Autosophy still images (00): data type, index (8 bit), payload of 6 bytes; still images use all 16-bit codes.]
Figure 6. Media-independent data formats using a universal hardware-independent bit code
A 2bit header defines the type and priority of the data packets. Real-time sound has the highest priority because any interruption in sound is particularly disturbing to the receiver. Real-time Autosophy video requires a lower priority because of its inherent resistance to packet latency and transmission errors.
Sound codes (11) transmit sound by cutting sound wave forms at the analog zero crossing point. Each bit code would represent a waveform in the sound stream. Sound codes must be randomly mixed with video codes to achieve synchronized sound in teleconferencing or television broadcast. There is no fixed relationship between the required number of sound and video codes. In Autosophy television, for example, there may be times when the video is changing rapidly with little sound, while at other times the video may move slowly with continuous sound. The bit-rate for sound transmission is determined by the sound content. Lower frequency simple sound, such as speech, would require fewer codes than higher frequency complex sound, such as music. Silence would require no code transmissions at all. Only sound that can be heard by the human ear needs to be transmitted.
Video codes (10) would each insert a small cluster of up to 16 full color pixels anywhere within the output image. Only moving portions of the video are transmitted to describe motion and complexity in the video. The bit cluster codes provide hardware-independent communication protocols. The video camera and monitor may both have entirely different image formats, image sizes, color resolution, or scanning rates and yet always remain both forwards and backwards compatible. This would allow television technology to evolve towards larger and larger screens and higher resolution, while using a universal media independent protocol. The new television system is ideal for the packet switching Internet environment since it avoids most of its Quality of Service (QoS) problems.
Text codes (01) use either 9bit or 18bit codes for compressed and encrypted text communication.
A 9bit code represents a single ASCII character, while an 18bit code represents a whole text word containing many characters. Autosophy text compression can achieve an average 3:1 compression ratio. More important is the built-in encryption. Virtually unbreakable security can be achieved when using private hyperspace encryption libraries. The system uses a pre-grown hyperspace library, which contains the most common words in a language.
Random bit codes (00) are used to transmit compressed still images or other random bit files from legacy formats. Autosophy compressed and encrypted still images are transmitted using only 16bit codes, which are hardware-independent to allow the transmission of any-sized images at any resolution. Random data types may be random bit codes, computer programs, library downloads, or any other unknown data formats. A 6bit "data type" field allows up to 64 different data types or separate data files to be simultaneously transmitted and mixed in the same channel. An 8bit index is required because data packets may be received out of sequence on the packet switching Internet.
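A minimal sketch of how the 2-bit type header might be packed and parsed is shown below; everything beyond the header is simplified, and any field widths not stated in the paper are our assumptions.

# Sketch of the 2-bit packet-type header described above.
# Field layouts beyond the header are simplified; widths not given
# in the paper are assumptions for illustration only.

HEADER_BITS = {"sound": 0b11, "video": 0b10, "text": 0b01, "random": 0b00}
NAMES = {v: k for k, v in HEADER_BITS.items()}

def make_packet(kind: str, payload: bytes) -> bytes:
    """Prefix a payload with one byte whose top 2 bits carry the packet type."""
    header = HEADER_BITS[kind] << 6           # 2-bit type in the high bits, rest spare
    return bytes([header]) + payload

def parse_packet(packet: bytes):
    """Return (type name, payload) from a packet built by make_packet()."""
    kind = NAMES[packet[0] >> 6]
    return kind, packet[1:]

# Sound (11), video (10), text (01), and random-bit (00) packets can be
# mixed freely in the same channel and sorted out again by the receiver.
stream = [make_packet("video", b"\x12\x34"), make_packet("sound", b"\xab"),
          make_packet("text", b"hello!")]
for p in stream:
    print(parse_packet(p))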
6. Failure-Proof Memories
The new Autosophy systems would require enormous capacity, non-volatile, Content or Random Addressable memories. The memory units must be small enough to fit into mobile robots and consume very little power so as to require no cooling and conserve the limited power of mobile robots. When Autosopher robots finally evolve from mere information access terminals to physical interactions with human beings, then near-absolute reliability is essential even in cases of severe physical damage. A malfunctioning robot may cause severe physical damage and injury to human beings.
A new Content Addressable Read Only Memory (CAROM) is being developed that is printed on thin foils using Poly-Silicon Thin-Film-Transistor (TFT) technologies. An archive memory may consist of very thin stainless steel foil, which is wound into a spool the size of a roll of toilet paper. Tiny thin-film transistors and printed wiring are deposited onto the foil through vacuum deposition in a continuous roll-to-roll industrial process yielding very inexpensive mass memories.
[Figure content: a CAROM memory cell with address lines (ADDR 1, ADDR 2), data lines (DATA 1, DATA 2), and a diode-transistor fuse connected to V+.]
Unlike in conventional memories, both the address and the data are programmable by setting tiny fuses, which are made to be either conductive or non-conductive. The fuses are programmed by the Autosophy learning algorithms. The input data selects or creates its own storage locations. There are several technologies to implement the non-volatile fuses: anti-fuses used in Field Programmable Gate Arrays (FPGA); FLASH memories; and transistors using Lead Lanthanum Zirconate Titanate (PLZT)
ceramics. Both the "Address" input and the "Data" output may be 32bit wide. Each learning network node or memory word may be physically located anywhere in the memory device. This makes it possible to relocate defective memory nodes to other memory locations for automatic self-repair. The memory spools act like a pure capacitive load, recycling the energy in each access cycle, resulting in very low energy consumption.
Two CAROM memory spools may be configured as a pair to provide self-checking, self-
repairing, self-healing Dual Entry Content Addressable Memories (DECAM). The two identical CAROM spools are configured so that one operates as a Content Addressable Memory, while the other operates as a Random Addressable Memory. Both memory spools contain the same information, but in a
complementary format. A memory error in one unit can be automatically repaired from the
complementary memory unit.
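The complementary-pair idea can be sketched as follows, with ordinary dictionaries standing in for the two CAROM spools; this is an illustration of the principle, not the hardware implementation.

# Sketch of a DECAM-style complementary memory pair: one spool indexed
# by content (CAM), the other by address (RAM). A damaged entry in one
# spool can be rebuilt from its complement. Dictionaries stand in for
# the CAROM hardware; this is illustrative only.

class Decam:
    def __init__(self):
        self.cam = {}   # content -> address
        self.ram = {}   # address -> content

    def store(self, address, content):
        self.cam[content] = address
        self.ram[address] = content

    def repair(self):
        """Rebuild missing entries in either spool from the complementary spool."""
        for content, address in self.cam.items():
            self.ram.setdefault(address, content)
        for address, content in self.ram.items():
            self.cam.setdefault(content, address)

memory = Decam()
memory.store(1, (0, "R"))
memory.store(2, (1, "O"))

del memory.ram[2]        # simulate a damaged cell in one spool
memory.repair()          # restore it from the complementary spool
print(memory.ram[2])     # (1, 'O')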
Examples of dual redundant information storage are found in double-entry accounting and in the DNA helix. In double-entry accounting, every transaction is recorded twice, as a debit and as a credit. Errors in one ledger can be corrected from the other ledger to obtain virtually error-proof accounting. In biological DNA, information is stored in two strands wound together into a helix, where each strand contains the same information in a complementary form. Special proteins can detect wrong pairings for correction.
The automatic self-repair facilities can also be used for rejuvenation and cloning of Autosopher robots. Removing one CAROM spool and replacing it with an empty spool will cause the robot to automatically restore the information from the remaining spool into the empty spool. The removed
CAROM spool may then be inserted into a second robot, together with an empty spool, to produce a robot clone with the same knowledge and “personality.” Rejuvenation involves double cloning in which an old memory unit is removed and replaced with an empty memory unit. Old robots can be rejuvenated without loss of information.
The new Autosophy methods may greatly improve all data communication and data storage.
7.1 A paradigm shift from hardware communication to universal content communication
Data standards and bit rates in conventional (Shannon) communications are determined by the data type and the hardware. For example, whenever television evolves toward larger screens and better image quality, a new standard is required, which will be incompatible with previous standards. Communication networks are caught in an endless cycle of introducing new data standards and upgrading old documents and files that were rendered incompatible. In Autosophy communication, in contrast, the bit rates depend on the data "content" (meaning) only. This universal data standard will not change with future evolution in the hardware. Data files and communication standards will remain both forwards and backwards compatible for the foreseeable future. Converting files from legacy protocols or operating systems to the new bit format would require simple software patches, or small chipsets for real-time data such as live video or sound. This would be a paradigm shift in communication and data storage that would revolutionize all communications on the Internet. The new standards could be introduced slowly without interfering with existing communications.
7.2 Information theories: Shannon vs. Autosophy
The classical Shannon information theory was originally developed in an age of primitive telegraph and telephone communications. It converts all data into meaningless bit streams, which does not allow for lossless data compression. Data encryption uses bit scrambling, such as pseudo random number generators, resulting in encryption codes that can be broken through high speed computing and determined efforts. The Autosophy information theory, in contrast, was developed for natural or human compatible communications based on mathematical "learning" algorithms. This provides both very high lossless data compression and virtually unbreakable encryption. As suggested in this paper, communication based on the Shannon theory can and should be replaced by communication according to the Autosophy theory. This requires extensive research and development, but the result may revolutionize all forms of communication, especially via the Internet or the future global information grid.
7.3 Impact on the Internet's Quality of Service (QoS)
Communicating real-time data, such as sound and video, via the packet switching Internet is very difficult because of the Internet's Quality of Service problems, including unpredictable bit rates, packet latencies, transmission errors, packets delivered out of sequence, or packets being dropped in a congested network. Error re-transmission in the TCP/IP protocols does not work well for real-time data, such as live video and sound, because of unpredictable re-transmission delays. Each type of data must be transmitted in a separate file, which cannot be used until the file transmission is completed. Mixing real-time data such as video with synchronized sound is extremely difficult on the packet switching Internet. In addition, data compression and encryption are no longer optional but a necessity. Conventional Shannon communications would require expensive and continuous upgrading of the Internet's Quality of Service.
A new bit universal data format, in contrast, may solve most of the Internet's Quality of Service problems, while allowing for high data compression and built-in encryption.
7.4 Media compatibility
Compatibility between different media will be a requirement of any future communication system. By using the universal bit data standard, real-time video with synchronized sound and other data can be forwarded from radio, to cellular telephones, to satellites, and through the Internet without needing to be reformatted. Video communication would be possible between any media or terminal, using any protocol or operating system. Communication may range from cellular telephones, to laptop computers, all the way to wall-size giant television monitors. Many video images could be merged into a single coherent picture.
7.5 Conventional lossy vs. Autosophy lossless compression
In conventional lossy image compression (JPEG, MPEG, DCT, Wavelets, Fractals), data compression is achieved only by sacrificing image quality. The more the images are compressed, the worse the image quality becomes. Image distortions include blurring, blocking, jagged motion, and introduced image artifacts. Lossless Autosophy compression, in contrast, will not distort the images or introduce imaging artifacts. It offers both "lossless" image compression (in which each bit is precisely reproduced) and "visually lossless" compression (in which only that which can actually be seen by the human eye or reproduced on the monitor is transmitted). Visually lossless image compression can greatly increase both compression ratios and perceived image quality.
7.6 Sensitivity to transmission errors
Error sensitivity is a severe problem in conventional communication and video compression (JPEG, MPEG, DCT, Wavelets, Fractals). A single wrong bit or gaps in the transmission can cause the video image to break up into random (snow) noise. Autosophy video, in contrast, is extremely resistant to transmission errors. Wrong bits in the transmission, or missing portions in radio transmissions, will only cause tiny spots on the video screen to freeze when they were supposed to change. The video quality will therefore remain excellent even in very noisy transmission channels.
7.7 Secure communications and encryption
Encryption is required to authenticate communication partners and to prevent the interception of data by unauthorized users. Autosophy methods offer built-in virtually unbreakable "codebook" encryption using separate encryption libraries for each user or groups of users. Malicious system break-ins or attempts at deception can be instantly detected and tracked to their origin.
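The paper does not spell out how the encryption libraries are built; the sketch below only illustrates the codebook principle, with a private library assigning an arbitrary 16-bit code to each word so that the transmitted codes are meaningless without the library.

import secrets

# Illustrative codebook-style encryption: a private library assigns a random
# 16-bit code to each word, and only holders of the library can map the codes
# back. The library construction here is an assumption, not the paper's method.

def build_private_library(words):
    codes = secrets.SystemRandom().sample(range(2 ** 16), len(words))
    return dict(zip(words, codes))

def encrypt(message, library):
    return [library[word] for word in message.split()]

def decrypt(codes, library):
    reverse = {code: word for word, code in library.items()}
    return " ".join(reverse[code] for code in codes)

library = build_private_library(["attack", "retreat", "at", "dawn", "dusk"])
ciphertext = encrypt("attack at dawn", library)
print(ciphertext)                      # e.g. [40213, 1577, 60911]
print(decrypt(ciphertext, library))    # "attack at dawn"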
7.8 Programming vs. Education
Computers must be programmed by human programmers, with every operation specified in a complex series of instructions. Computer programming has become very complex and expensive. Further progress in computing may become more and more difficult because of the limited intelligence of human programmers. Autosopher, in contrast, are educated very much like human children. The information is absorbed in a "black box" where each input data item defines or creates its own storage locations. All human knowledge can thus be absorbed in archives without any supervision of internal operations. Robots may evolve towards higher and higher intelligence without being limited by human intelligence.
7.9 Communication formats, merging of knowledge
Computers treat all data items as "quantities" that have no meaning, and which can only be merged if all data is of the same kind and within compatible software systems. Because of a plethora of different data standards and software languages, existing archives cannot easily be merged. An Autosopher, in contrast, can retrieve all its stored knowledge in a "lesson" file, which can be communicated for merging with other Autosopher. Many different archives can exchange or merge knowledge in a universal format with only new information being learned by each archive. All archives would always remain compatible.
7.10 Database access speed
Access time in a computer database grows with the amount of data stored in the database: the larger the database becomes, the longer it takes to retrieve information. Very large databases may have very slow access speeds. In Autosophy archives, in contrast, access speed is independent of the database size. A very large archive will have the same very fast access speed as a small database. This is because of the parallel search method of a Content Addressable Memory, in which all storage locations are accessed and searched at the same time. Increasing the access speed of the memory devices will have very little benefit to the system as a whole.
7.11 Linear vs. Hyperspace information storage
A data processing computer database uses linear data storage. A doubling of the data volume (ASCII characters or pixels), for example, would require a doubling of the storage facilities. Autosopher archives, in contrast, store data in a saturating hyperspace mode in which each symbol or data item is stored only once. The more information already stored in the archive, the less additional storage space is required to store additional information. This provides very high "lossless" data compression and storage economy in very large archives. The larger the archives, the more efficient the information storage.
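The saturation effect can be illustrated with the prefix-node storage of Section 2: each new string adds nodes only for the prefixes that are not already stored. The sketch below is our own illustration, not the archive format itself.

# Illustration of saturating storage: each (pointer, character) node is stored
# only once, so related strings add fewer and fewer new nodes. This reuses the
# prefix-tree idea of the serial network; it is a sketch, not the archive format.

def new_nodes_needed(word, stored):
    """Count prefix nodes of `word` not already in the shared store, then add them."""
    added = 0
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if prefix not in stored:
            stored.add(prefix)
            added += 1
    return added

stored = set()
for word in ["ROSE", "ROBOT", "ROOT", "RED", "READY", "READ", "ROB"]:
    print(word, "->", new_nodes_needed(word, stored), "new nodes")
# Later, related words (READ, ROB) need no new storage at all:
# the fuller the library, the cheaper each additional item becomes.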
7.12 Reliability
Data processing computers are very fragile and unreliable. A single blown transistor, a wrong instruction execution due to noise, or a single bit memory error, can cause total system failures. Once a computer fails, its operations become very unpredictable. This can cause great damage and even loss of human life.
A computer can also be attacked by viruses introduced through its communication channels. Autosopher, in contrast, cannot be attacked by viruses. While it is possible to insert wrong information into an archive, such information will not interfere with operations. The Autosopher acts in essence like a "black box" to organize its own internal operations without human interference. Internal error checking in a self-repairing memory could make the Autosopher virtually failure proof.
8. Conclusion
Building the worldwide multimedia archives of the future will neither be quick nor easy. The new archives should combine virtually unlimited storage capacities, with near instant access speed, low cost, and high reliability. When the archives migrate to mobile robots, then small memory size, low power consumption, and near absolute reliability will become essential. Reliability must be sustained even after severe physical damage. The archives should eventually communicate with us like an electronic companion, using grammatical speech, with deductive and logical reasoning. The various archives should be able to communicate with each other via the Internet, sharing and constantly updating their stored information. This would require a universal hardware-independent data format able to tolerate the Internet's Quality of Service (QoS) problems. Real time data, such as live video with synchronized sound, must be seamlessly transmitted within the Internet’s intermittent packet stream. The worldwide archives should be accessible via wireless connections by millions of simultaneous users. This would require high tolerance to the noise and transmission errors found in radio and cellular telephone channels. Autosophy offers an alternative to conventional archiving that provides solutions to all those problems. Further evolution could lead to intelligent robots and eventually true Artificial Intelligence.
9. References
1 For information, tutorials, patents, and technical paper downloading use www.autosophy.com
For a larger search of Autosophy research use the keyword "autosophy" in your search engine.
2 K. Holtz, E. Holtz, D. Kalienky "Autosophy data/image compression and encryption". SPIE 49th
Annual Meeting. Optical Science and Technology. Mathematics of Data/Image Coding,
Compression and Encryption VII, with Application. Denver Colorado, Aug. 4, 2004
3 K. Holtz, E. Holtz, D. Kalienky "Autosophy Failure-Proof Multimedia Archiving". IS&T's 2004
Archiving Conference. www.imaging.org, San Antonio Texas, April, 2004
4 K.Holtz, E. Holtz, D. Kalienky “Universal Hardware-Independent Data Formats for Real-Time
Multimedia Communications.” CCCT’03 International Conference on Computer,
Communication and Control Technologies. July 2003, Orlando Florida. www.autosophy.com
5 K. Holtz, E. Holtz, D. Kalienky "Replacing Data Processing Computer with Self-Learning
Autosopher: Impact on Communication and Computing" SCI 2003 The 7th World Multi-
conference on Systemics, Cybernetics and Informatics. July 2003, Orlando FL.
6 K. Holtz, E. Holtz "Autosophy Still Image Compression" IS&T/SPIE Electronic Imaging
Science and Technology, Internet Imaging IV, SPIE Vol. 5018, Santa Clara, CA, Jan. 20, 2003.
7 K. Holtz, E. Holtz, "The Emerging Autosophy Internet" SSGRR 2002s, L'Aquila Italy Aug. 2
(2002) www.ssgrr.it/en/ssgrr2002s/papers.html, Paper 140 or www.autosophy.com
8 K. Holtz, E. Holtz. "Autosophy Internet Video" IS&T/SPIE Electronic Imaging Science
and Technology, Internet Imaging III, SPIE Vol. 4672-13,
San Jose, CA, Jan. 21, 2002. http://spie.org/Conferences or www.autosophy.com.
9 K. Holtz, E. Holtz, "An Autosophy Image Content-Based Television System".
IS&T's 2001 PICS Conf, Montreal Canada. www.imaging.org/store/epub.cfm?abstrid=44
10 K. Holtz, "Advanced Data Compression promises the next big Leap in Network Performance”
EUROPTO'98, SPIE 3408, Zurich, Switzerland (1998) www.autosophy.com
11 K. Holtz, "Autosophy Information Theory provides lossless data and video compression based on
the data content," EUROPTO'96 / SPIE-2952, Berlin Germany 1996, www.autosophy.com
12 U.S. Patent 5,917,948 "Image Compression with Serial Tree Networks"
13 U.S. Patent 5,576,985 "Content Addressable Read Only Memory" (CAROM)
14 K. Holtz, “Digital Image and Video Compression for Packet Networks”
SuperCon’96, Santa Clara CA. (1996), www.autosophy.com
15 K. Holtz, "Packet Video Transmission on the Information Superhighway using Image Content
Dependent Autosophy Video Compression," IS&T's 48th Annual Conference, Washington DC, 1995
16 Advanced Information Management WESCON/94, Session W23,
Sept. 29, 1994, 5 papers, Anaheim CA.
17 Digital Video Compression. WESCON/94, Session W9, Sept. 28, 1994, Anaheim CA.
18 K. Holtz, “Hyperspace storage compression for Multimedia systems”, IS&T / SPIE Electronic
Imaging Science and Technology, SPIE Vol. 2188-40, Feb. 8, 1994, San Jose CA
19 K. Holtz, "Autosophy Networks yield Self-learning Robot Vision". Applications for Lossless Data
Compression, WESCON/93, Session S2, Paper 5, San Francisco, CA, Sept. 28, 1993
20 K. Holtz, “Self-aligning and compressed Autosophy video databases”. Storage and Retrieval for
Image and Video Databases. SPIE Vol. 1908, 1993 San Jose, CA,
21 K. Holtz, “Lossless Image Compression with Autosophy Networks”. Image and Video
Processing, SPIE Vol. 1903, 1993 San Jose, CA,
22 K. Holtz, “Autosophy image compression and vision for aerospace sensing”.
SPIE-92, Vol. 1700-39, Orlando, FL, April. 24, (1992)
23 K. Holtz, “HDTV and Multimedia Image Compression with Autosophy Networks”
WESCON/92, Nov. 1992, Anaheim, CA,
24 K. Holtz, E. Holtz, “True Information Television (TITV) breaks Shannon Bandwidth Barrier”.
IEEE Transactions on Consumer Electronics, Vol. 36, No. 2, May 1990
25 K. Holtz, “Text Compression and Encryption with self-learning Networks”
IEEE GLOBECOM-85, New Orleans (1985)
26 U.S. Patent 5,992,868 "True Information Television (TITV) and Vision System"
27 T. Welch, "A Technique for High Performance Data Compression" IEEE Computer, June (1984)
28 K. Holtz. “Build a self-learning no programming Computer with your Microprocessor”
Dr. Dobbs Journal, Number 33, March 1979, Vol.4
29 J. Ziv, A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding"
IEEE Transactions on Information Theory, IT-24 (1978)
30 K. Holtz, E. Langheld, “Der selbstlernende und programmier-freie Assoziationscomputer”.
ELEKTRONIK magazin, Germany, Dec. 1978.
31 K. Holtz, "Here comes the brain-like self-learning no-programming computer of the future"
The First West Coast Computer Faire 1977
32 U.S. Patent 4,366,551 "Associative Search Method"
33 C. Shannon, "A Mathematical Theory of Communication," Bell Telephone B-1598, July (1948)