





Conclusions


This third set of class notes has looked at ways in which random numbers, encryption, and hashing are all related. These ideas show how data protection and simulation, including random choice, can be approximated on a computer. These notes have given you a set of tools for modeling things in the outside world.

These fundamental operations give rise to the best data-retrieval algorithm known: hash searching. These notes have presented some basic hashing algorithms, along with the layout of some basic hash data structures that put the hashing ideas to use. Several alternative approaches were given for using the randomization of a key to establish its properties. You should think about these approaches until they become second nature, because getting to data efficiently is central to most programming problems.
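As a reminder of the central idea, here is a minimal sketch of hash searching, written in C rather than the Pascal of the course; the table size, the linear-probing scheme, and the hash function itself are illustrative assumptions, not code from these notes:

    #include <stdio.h>
    #include <string.h>

    #define TABLE_SIZE 101                 /* a small prime, chosen for illustration */

    static const char *table[TABLE_SIZE];  /* NULL marks an empty slot */

    /* A simple multiplicative string hash; the constant 31 is a
       conventional choice, not one prescribed by these notes. */
    static unsigned hash(const char *key)
    {
        unsigned h = 0;
        while (*key)
            h = h * 31 + (unsigned char)*key++;
        return h % TABLE_SIZE;
    }

    /* Insert a key, resolving collisions by linear probing.
       This sketch assumes the table never fills completely. */
    static void insert(const char *key)
    {
        unsigned i = hash(key);
        while (table[i] != NULL)           /* step forward until an empty slot */
            i = (i + 1) % TABLE_SIZE;
        table[i] = key;
    }

    /* Search takes, on average, a constant number of probes when the
       hash spreads keys uniformly and the table is not nearly full. */
    static int search(const char *key)
    {
        unsigned i = hash(key);
        while (table[i] != NULL) {
            if (strcmp(table[i], key) == 0)
                return 1;                  /* found */
            i = (i + 1) % TABLE_SIZE;
        }
        return 0;                          /* empty slot reached: not present */
    }

    int main(void)
    {
        insert("random"); insert("encrypt"); insert("hash");
        printf("%d %d\n", search("hash"), search("sort"));  /* prints: 1 0 */
        return 0;
    }

The point of the structure is that the expected search cost does not grow with the number of keys stored, which is why hash searching is the retrieval method of choice.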

In another sense this brings to an end the intellectual path we have traveled. We started by encoding information as bits; then we saw how those bit encodings could represent anything that could be put in correspondence with a finite set of integers. Later we explored how a computer manipulates these bit encodings with operations that create new bit encodings, giving us a machine that does general symbol manipulation. Now we have seen how that machine can be used to generate pseudo-random numbers, to encrypt data, and to retrieve data quickly.

Note that much of what is important here violates the type discipline taught in Pascal. When we hash something we ignore its type and deal with its bit encoding. This would be nearly impossible to understand without understanding the bit encoding in the first place. Often the algorithms exploit properties of a 32-bit-word computer and apply bit operations to arithmetic quantities; this too cannot be properly understood without understanding the basic structure of a computer like the Hermes machine.
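To make this concrete, here is a small C sketch of what ignoring the type and dealing with the bit encoding looks like: the 32 bits of a float are reinterpreted as a 32-bit unsigned integer and then mixed. The mixing steps and the constant are illustrative assumptions, not a specific algorithm from these notes:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Hash a float by its bit pattern, not its numeric value.
       memcpy is the portable way to look at the same 32 bits
       under a different type. */
    static uint32_t hash_float(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* the type violation, made explicit */
        bits ^= bits >> 16;               /* illustrative mixing step */
        bits *= 0x45d9f3bu;               /* an arbitrary odd constant */
        bits ^= bits >> 16;
        return bits;
    }

    int main(void)
    {
        printf("%08x\n", (unsigned)hash_float(3.14f));
        return 0;
    }

Pascal's type rules exist precisely to forbid this kind of reinterpretation; the hashing algorithms work because, at the machine level, every value is just a word of bits.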

Likewise, to understand the generation of random numbers, it helps to understand what a bit is in information theory. A source of information carries the maximum information per bit when the probability of a 1 is exactly ½. How comforting it is to find that same idea recurring in random numbers, where in a good random sequence the probability of each bit being 1 is again ½. And one can feel equally comfortable recognizing that the requirement that a hash function distribute keys uniformly across some integer range is again a matter of maximizing entropy.
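This claim can be checked directly. The derivation below is not in these notes, but it is the standard information-theoretic fact behind them: the entropy of a bit that is 1 with probability p,

    H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p),

is maximized where its derivative vanishes,

    \frac{dH}{dp} = \log_2 \frac{1 - p}{p} = 0 \implies p = \tfrac{1}{2},

giving H(½) = 1: exactly one bit of information per binary digit. The same condition applies whether the source is a message, a random sequence, or the output of a hash function.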

Learning when to use intellectual concepts like types and information theory, and when to discard them, takes effort, experience, and understanding. With effort, experience, and understanding comes wisdom. It takes wisdom to know when to use a mathematical concept and when to rely on common sense. Wisdom comes from a deep understanding of what is going on. I hope this set of notes has started you along that path.










Copyright © 1995 Robert Uzgalis. All Rights Reserved.
Contact: buz@cs.aukuni.ac.nz