---
We were wallowing in the muck (it's an election year, so give us a break), wondering what could pull us out of the morass (we do need to get our tops up and spinning). Well, those with oodles of bucks, unless they are extremely careless (read: stupid), find it easier to remove themselves (not exactly, but that's a big-T 'truth' issue) from the mess (they leave the crap to others). And big bucks now are wandering after 'apps,' which are oriented, for the most part, toward personal devices (mobile ubiquity).
Aside: Most don't enjoy that opportunity. Someone used baseball to characterize those with deep pockets, in the following sense: they categorized a person's success by their starting position. Ellison, of the relational thing, started from the batter's box. Some start from first base. Others already had a home run at birth. You know, most are not even in the ballpark. Z of FB was on one of the bases.
So, big bucks HOPE to make more by chasing after dreams (illusions -- as in, chimeras). There may be technical aspects to the process that brings things forth, yet one would think that appealing features are more the focus than outright computational prowess. Calling something an algorithm does not make it so (see below).
Sidebar: other issues that we'll discuss: A. three years with Vista and no BSOD -- only two weeks with this newer thing, Windows 7, and I already have one; B. using the recent version of a program from a famous company, I kept seeing the thing go back to the initial screen. Well, it turns out that this is the default error mode (more friendly than the BSOD?), but it does the unwind without leaving any hint (see below) of what was wrong. It's like going back to square one and starting over (ever heard of Sisyphus?). But, getting a top to spin is not trivial. C. ...
---
So, let's look at this by using the three concepts from the subject line. They can be thought of as having a progressive association, somewhat as the following describes.
- hint -- We can view this several ways, but consider a math problem. Many times, there may be a hint about how to approach a solution. One could say that a hint is sufficient to the wise. Unfortunately, computers are not wise and need explicit instructions. That, folks, is both the boon and the bane. It's the former since we can (try to) know what went into some computation (only under certain restrictive assumptions dealing with things like side-effects, black boxes, etc.). Another way to think of a hint is that it lays down markers, within a landscape, whose presence helps keep things moving toward an end. Let's face it: smart folks do not like to follow explicit instructions more than once (why, then, do we put people into positions where they are, for the most part, automatons? -- stupid, from several sides). You know what? We cannot give a hint to a computer in the same way that we can to smart folks (forget the so-called smart devices -- dumb-ass'd things, people). Except, if a person is in the loop during a computational event of a complicated nature, the person can nudge the system away from divergent tendencies (a sketch of this nudge appears after this list; yes, we'll expand at large on this being one way out of the quagmire -- suggesting, of course, human talents not yet developed).
- heuristic -- We could say rule-of-thumb, but it's meant to be more than that limiting view. A heuristic view can be fairly powerful, approaching, somewhat, the 'nudge' mentioned in the prior bullet (see the second sketch after this list). Why don't we hear people talking about this, rather than the loud pronouncements about their algorithms? If anything, any decision logic in a real-world situation would need to be heuristically based (using real data and analysis thereof). Developments after Hilbert's question about a procedure to decide all of mathematics (the Entscheidungsproblem; Mathematica, and its ilk, notwithstanding) suggest as much.
- algorithm -- There is no agreed-upon definition. But one strong notion is that an algorithm has (ought to have) more rigor behind it than not. Now, looking at the various characterizations (thanks to Wikipedia editors) can be interesting. Knuth's view probably best fits my experience. Namely, we have this: finiteness, definiteness, input, output, effectiveness (the third sketch after this list walks a classic example through these five). Let's look at those. Finiteness: some argue that we have this by the very nature of our domains. Not so. I've seen too many things loop (every day, operators get lost on the web, for various reasons, which requires a redo). Combinations of non-finite spaces can look extremely large from certain viewpoints. Definiteness: unless there are explicit user requirements, there'll always be a problem here. Convergence, through interest and use, may be the modus operandi, yet it only approximates. Input: thankfully, menu interfaces inhibit problem emergence. But, just a few days ago, someone was trying to show off his new device. And he started his dialog as if a 'smart' person were on the other side of his Shannon experience. Hah! The thing barf'd. I've seen this too many times, folks. Output: crap, just look at the mangled messages out of supposed smart fixup and fat fingering. It's almost epochal. I've seen too much effort by people to demonstrate that their smart device is such. We dumb ourselves down, folks, in order for this to happen (in many, many ways). Effectiveness: as measured by what? Showing off to others? I'll be impressed when we see some type of verification process in place (no, not just testing). However, this is a hard thing to do.
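The 'nudge' in the hint bullet can be made concrete. Below is a minimal sketch of an iterative computation (Newton's method, purely as a stand-in) that, when it stops making progress, asks a human-supplied callback for a fresh starting point. Everything here -- newton_with_nudge, ask_human, the stall counter -- is hypothetical illustration, not anybody's actual system.

```python
def newton_with_nudge(f, df, x0, ask_human, tol=1e-9, max_iter=100):
    """Root-find with Newton steps; fall back on a human hint when stuck."""
    x, best, stall = x0, float("inf"), 0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x                    # converged: done
        if abs(fx) < best:
            best, stall = abs(fx), 0    # still making progress
        else:
            stall += 1                  # cycling or diverging
        d = df(x)
        if d == 0 or stall >= 3:        # flat spot, cycle, or blow-up
            x = ask_human(x)            # the nudge: a human-supplied hint
            best, stall = float("inf"), 0
            continue
        x = x - fx / d                  # the usual Newton step
    raise RuntimeError("no convergence; even the hints ran out")

# Usage: x^3 - 2x + 2 makes plain Newton cycle forever when started at 0;
# the 'human' (a lambda here, standing in for the person in the loop)
# supplies a better guess and the computation recovers.
root = newton_with_nudge(
    f=lambda x: x**3 - 2*x + 2,
    df=lambda x: 3*x**2 - 2,
    x0=0.0,
    ask_human=lambda x: -2.0,
)
print(round(root, 6))  # ~ -1.769292
```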
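And the second sketch: a heuristic as the middle bullet uses the word -- a rule that leans on real data and usually does well, with no guarantee. A nearest-neighbor tour for a small traveling-salesman instance is the classic case; this is illustrative only.

```python
# Nearest-neighbor tour: always visit the closest unvisited point next.
# Fast, uses the real data (the distances), usually decent -- but it
# carries no optimality guarantee, which is exactly the trade that the
# heuristic label is supposed to announce.
from math import dist

def nearest_neighbor_tour(points):
    """Greedy tour over a list of (x, y) points, starting at index 0."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(here, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

points = [(0, 0), (4, 0), (4, 3), (1, 5), (0, 2)]
print(nearest_neighbor_tour(points))  # a plausible, not provably best, tour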
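Finally, the third sketch, for the algorithm bullet. Euclid's gcd is the textbook case (it is the example Knuth himself opens with) of something that earns the word against all five criteria.

```python
# Euclid's gcd against Knuth's five criteria: finiteness (the remainder
# strictly shrinks, so the loop must stop), definiteness (each step is
# exactly specified), input (two integers), output (their greatest common
# divisor), effectiveness (each step is basic arithmetic doable by hand).
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's method."""
    a, b = abs(a), abs(b)
    while b:                  # finiteness: b strictly decreases toward 0
        a, b = b, a % b       # definiteness: one precise, mechanical step
    return a                  # output: well-defined for the given inputs

print(gcd(1071, 462))         # 21 -- a classic worked example
```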
---
Reminder: see my remarks about Mr. Jobs' little demos from the earlier days. My thought still stands, despite IBM Watson's success on Jeopardy.
Aside: Don't get me wrong. I love that the software frameworks have evolved immensely. It's nice to open the covers and see the progressive build upon what one hopes is a sound foundation. But it took oodles of time, effort, and caring work to get here. How do we try to show the extreme tedium, and angst, that has been the case from the beginning? You know what happened, folks? The tide was turned so that computation can be faulty without any user kickback except for not buying, or complaining, and the vendor saying: sorry. HOW many millions (billions) of dollars of work have been lost through errors that might be of an egregious type, if we could get under the kimono to know for sure? Well, the losses are not finite, for several reasons (they cannot be counted, or accounted for, and the secondary effects are not measurable).
---
Oh, who with money (yeah, venture people, you) is even interested in the longer-term problems?
01/23/2015 -- Software? Well, we are talking more than apps (the latest craze). We are dealing with fundamental questions which then give rise to normative issues in mathematics (and, by extension, in the computational).
01/05/2015 -- See header.
03/23/2014 -- SAT solvers as an example of a large class of heuristics.
02/25/2014 -- Put in a context link, up top.
07/20/2013 -- So, today, found out that a new handle for algorithms is algos. Okay.
06/14/2013 -- Moshe's editorial in the Communications of the ACM discusses algorithms. Comments, and subsequent letters, responding to Moshe's thoughts indicate the wide range of opinion on the matter.
05/02/2013 -- This has been a popular post (second most popular) of late. Perhaps it's the growing awareness of the ever-increasing gap between what we think we know and what we actually know. Well, the theme of the blog is to look at these types of problems. Perhaps we can bring something forward from the past (such as our not learning Anselm's message). I mentioned it somewhere (I'll track it down), but the motivation for this post was, in part, hearing young guys brag about their algorithms in a public place (of course, a weekend situation, so some latitude is allowed). That concept is bandied about these days without people giving the notion its proper respect.
10/05/2012 -- Related posts: Zen of computing (in part), Avoiding oops.
10/05/2012 -- IJCAI has a newer flavor. It, at one time, had a mainly theoretic thrust. Need to re-acquaint myself with the conference (attended: 1987 Milan, Italy; 1991 Sydney, Australia; 1993 Chambery, France; 1995 Montreal, Canada). This talk from IJCAI-2011 relates to the theme of this post: Homo heuristicus.
Modified: 01/23/2015