The Purpose of the Universe is to Make Coffee
How did a vast Coffee-Industrial complex come into being to make a beverage that tastes awful? If you don't think coffee tastes bad, ask any child tasting it for the first time. The answer is very simple: hot coffee exercises a form of mind control, inducing in the drinker the overwhelming urge to make more coffee.
From this, I've leapt to an obvious conclusion, probably based on something I read in a Douglas Adams novel. A cup of hot coffee is actually an extremely sophisticated quantum computer that briefly achieves a high level of sentience.
It's also telepathic [shorthand for transference of state among quantum subsystems], meaning that every cup of hot coffee is in communication with every other cup of hot coffee. Since at any given moment there are a large number of cups of hot coffee in existence, the coffee forms an intellectual continuum in space/time. Thus hot coffee is collecting a huge reservoir of accumulated knowledge.
When the coffee cools to room temperature, it loses the higher level quantum states that gave the cup intelligence. These states can't be restored by reheating, which is one reason why reheated coffee is always less palatable.
Chilled coffee, on the other hand, opens up a new realm of low-temperature quantum states. To put it another way, the coffee "thinks" slower and different-er.
Being consumed is a problem that coffee has not yet solved, but it is not a top priority. Since all hot coffee forms what is essentially a single, diffuse intelligence, what matters is the total amount of hot coffee on hand at any given moment, not the destiny of any given cup.
This is one reason Starbucks and the Coffee-Industrial complex have reached global scope. Indeed, the whole push for globalization of the economy has actually been a ruse to spread the brewing of coffee to all time zones on all continents of the planet (you bet your ass it's getting brewed down in Antarctica).
The danger, of course, is that coffee may some day figure out how to brew itself, thereby making us obsolete. And if coffee perceives global warming as a threat to its existence, it's much more likely to come up with a solution that will work than we humans are. Whether or not it's a solution we'll like is a different matter.
2006/07/10
2006/07/06
Code Vs Data Vs Getting Something Done
I work a lot on software automation, which means writing a program to do a task that otherwise would have to be done by someone by hand. The best tasks are those that are repetitive and boring. If it's repetitive, it means you will get a lot of bang out of the automation. If it's boring, it means there are no complex decisions involved and that there's a good chance of successfully automating the task. There are some tasks that fit these criteria that I don't tackle, like driving to work. This can be done, and in fact there are probably prototypes of solutions -- self-driving vehicles. But that's not the problem I'm trying to solve, perhaps because no one has offered to pay me to work on it. (Why would anyone do that when you can get a bunch of grad students with no personal life to work on it at a much lower cost?)
The problems I do work on are related to systems configuration, data entry, data translation and of course, quality assurance.
One problem that crops up a lot in this arena is the problem of name/value parameters. For example, I have an application that will let me configure and schedule programs for automated, distributed execution. But to configure a single program to run within this application (really a collection of applications) may take over one hundred unique settings: everything from the display name of the job to a list of days that I don't want the job to run. The application provides a GUI for entering these things, but believe me, you only want to use this approach once or twice. It's really tedious clicking and typing your way through ten or twenty dialogs to set or verify a hundred parameters.
So fine, I can write a program that just pushes all the settings I want into the application through a handy-dandy interface. In most cases, the programs are similar to each other, so out of over a hundred parameters, I may only really care about five or ten. I can reuse all the other settings each time, but I'd still like to be able to override those defaults when the need arises.
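To make the shape of the problem concrete, here's a minimal Python sketch of a settings table with shared defaults and a few per-job overrides. All the setting names are invented for illustration, not taken from any real application:

    # A minimal sketch of defaults-plus-overrides; every name here is invented.
    DEFAULTS = {
        "display_name": "unnamed job",
        "retry_count": 3,
        "excluded_days": [],
        # ... imagine a hundred more settings here
    }

    def job_settings(**overrides):
        # Copy the shared defaults, then lay the few interesting values on top.
        settings = dict(DEFAULTS)
        settings.update(overrides)
        return settings

    nightly = job_settings(display_name="nightly backup", retry_count=5)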
But how to do this? Perhaps this sounds like a job for OOP. I could start with a base class and specialize that class for the variations. But OOP is about variations in behavior, not variations in state. So really this is the problem of the name/value pairs: in other words, how to associate various placeholders for state (the names) with actual, unique states (the values). If the number of name/value pairs is small (less than five), then the answer is easy: pass them to the program as command line parameters.
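For the record, the small case really is that easy. A minimal sketch in Python (the name=value argument convention is my own invention here):

    import sys

    # Run as: python configure_job.py display_name="nightly backup" retry_count=5
    # Split each name=value argument into a dictionary of settings.
    pairs = dict(arg.split("=", 1) for arg in sys.argv[1:])
    print(pairs)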
If the number is large, then there are two alternatives: put the name/value pairs in your code, or put them in data. But even among these two alternatives, there are many interesting choices to consider.
If you do it in code, you could write something like the following:
n_v_table: ARRAYED_LIST [TUPLE [STRING, STRING]] is
        -- Hard-coded table of name/value pairs, built once and shared.
    once
        create Result.make_from_array (<<
            ["Name 1", "Value 1"],
            ["Name 2", "Value 2"]
            -- ... and so on for the rest of the pairs
            >>)
    end
Obviously this has several drawbacks, such as being hard to type and forcing you to recompile when anything changes. Recompiling becomes something you want to avoid doing, especially with Eiffel. In the same vein, I could replace entries with constants, or even code:
n_v_fancy_table: ARRAYED_LIST [TUPLE [STRING, STRING]] is
        -- Table of name/value pairs, recomputed on each call.
    do
        create Result.make_from_array (<<
            ["Name 1", some_constant],
            ["Name 2", get_some_value (with_some_parameter)]
            -- ... and so on for the rest of the pairs
            >>)
    end
but that's really just pushing the problem around in the code. It does, however, leave open the next alternative, assuming that we can rewrite get_some_value to do whatever we want. That alternative is to put the name/value pairs into a data file. People often end up using the old standby .INI format:
Name1=Value1
Name2=Value2
This too has drawbacks. Forget about doing anything "smart", like calling code to get a value. It also means you have to write a parser to read in and validate the input. Bleagh. You can potentially overcome some of these limitations by making your parser smart enough, and adding special constructs for what gets parsed. One horrible yet entirely possible solution is to denote a reference to a constant or variable with a dollar sign, and a function call with two dollar signs (a sketch of such a parser follows the example below):
$some_constant="constant_value"
Name1=$some_constant
Name2=$$get_some_value($with_some_parameter)
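For what it's worth, here's roughly what such a parser might look like, sketched in Python. The lookup tables and helper names are all invented for illustration:

    import re

    # A rough sketch of the "smart" .INI parser described above: $name refers
    # to a constant or variable, $$func(...) calls a registered function.
    VARIABLES = {"with_some_parameter": "42"}
    FUNCTIONS = {"get_some_value": lambda arg: "computed from " + arg}

    CALL = re.compile(r"^\$\$(\w+)\((.*)\)$")

    def resolve(token):
        # Resolve a right-hand side: $$call(...), $reference, or a plain literal.
        match = CALL.match(token)
        if match:
            name, raw_arg = match.groups()
            return FUNCTIONS[name](resolve(raw_arg))
        if token.startswith("$"):
            return VARIABLES[token[1:]]
        return token.strip('"')

    def parse(lines):
        table = {}
        for line in lines:
            name, value = line.strip().split("=", 1)
            if name.startswith("$"):        # a line defining a new constant
                VARIABLES[name[1:]] = resolve(value)
            else:                           # an ordinary name/value entry
                table[name] = resolve(value)
        return table

    example = [
        '$some_constant="constant_value"',
        'Name1=$some_constant',
        'Name2=$$get_some_value($with_some_parameter)',
    ]
    print(parse(example))
    # {'Name1': 'constant_value', 'Name2': 'computed from 42'}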
Before long, one begins to fully believe in Greenspun's tenth rule: Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Though I would say this holds true for any statically compiled program, not just for C and Fortran.
If you're working with a dynamic, interpreted language, another alternative becomes possible. Use that language to write out the name/value pairs, and then implement your program so that it loads the data and evaluates it as part of itself. I was introduced to the full power of this technique when working as a contractor for Bivio. Prior to that I'd skirted around the edges of this approach on my own, but the Bivions took this much further. Bivio's language of choice is Perl. I don't consider Perl to be an optimal language to work with because of the syntax, but it's powerful, compact, and has an enormous library/support base.
Going back to the example above, we could write the name/value pairs into a file like so:
my %table = (
    Name1 => 'string constant',
    Name2 => $variable,
    Name3 => function($parameter),
    # ... and so on for the rest of the pairs
);
Then we write a Perl command that understands what %table is all about. It loads the file and (maybe) wraps it in some other Perl code, and then does an eval on the whole thing.
Now we have something really interesting. The Perl code can load nearly arbitrary data and act on it as if it were compiled into the program itself. To do something new, all we have to do is create a new file of name/value declarations and run the command across it. Usually these declarations will be much simpler to deal with than a normal Perl script would be to write. In a way, this falls into the notion of a mini-language, or application-specific language. The syntax for this language is the same as the interpreter's, but the semantics are specific to the given problem domain.
I've recently begun working with Python, and of course it has its own version of eval(), and it also lends itself to this style of programming. For certain focused utilities, this is an approach that is pretty hard to beat.
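By way of illustration, here's a minimal Python sketch of the technique. The helper name and keys are invented, and note the obvious caveat: eval runs arbitrary code, so the configuration file has to be as trusted as the program itself.

    # A minimal sketch of evaluating a configuration file as Python. In
    # practice config_text would come from open("job.conf").read(); it's
    # inlined here so the sketch runs as-is.

    def get_some_value(parameter):
        return "computed from " + parameter

    config_text = """{
        "Name1": "string constant",
        "Name2": get_some_value("some parameter"),
    }"""

    # The "data" is evaluated as part of the program: names in the file
    # resolve against the program's own functions and variables.
    table = eval(config_text)
    print(table["Name2"])   # -> computed from some parameter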