Wednesday, August 22, 2012

Virtual reality

Lots to catch up on, but this little article from the IEEE Spectrum is interesting. It's written by someone who admits that he made his money using technology to make things more efficient. Now, he's reflecting on some of the lingering effects, which may not all be good.

Somehow, expecting business to have a social conscience is a fool's dream. As we all know, there has been a bifurcation twixt elite and serf, covered here and in the related blogs several times, that manifests itself as a gigantic gap between those who have and those who do not have (or have not so much).

But, that's a side issue, for this post. Yes, there is an appeal brought about by those little devices (darlings that they are) that allow access (ubiquitously and without cessation) to that which is a mere web yet appears to be an endless reality. For some, it's about as close to abstraction as they're going to get. Hence, the pull.

If we taught people how to handle abstractions better, there would be fewer problems. But, we'll get to that. You see, mathematics has become the purview of the elite, whereas it ought to be a purveyor of truth, accessible to all.


Elsewhere, there was a mention of Facebook being a metaphor. Actually, it represents this problem from several sides.

The reality is that this overlay being built (of which current technology is only providing a glimmer) upon our casual reality will continue to be and to grow in size and complexity. The elite/serf split has been made worse by that which was done early. It's imperative that we understand the dynamics better so as to make less certain the dire straits that, given current thrusts, seem inevitable for the masses.


08/22/2012 --

Modified: 08/22/2012

Monday, August 6, 2012

Effectiveness, again

Effectiveness? There can be much discussion about this, but, essentially, let's just say that some 'thing' is effective if it works, or produces results, as expected (expectation management might be of interest here, see below -- prior posts in the category). If we take STEM (looking backward), we see a lot of evidence of effective approaches that are remarkably productive. Examples abound: the computer (even if it's within a PDA) being used to read this post, the system behind the blog's use of posts, the communications scheme (WWW, et al), and much more.

Example of effectiveness: Financial Times (08/07/2012)
We have all sorts of others. Take NASA's Curiosity Rover (and its older cousins) or SpaceX, for instance. We could spend hours discussing what is behind that. In this case, mathematics and computation stand out, though a lot of engineering has been involved, to boot (most of which apply advanced computational methods). There's medicine and its marvels to look at.

Business? Well, we can find glimmers here and there, if we discount the idiocy of some finance. But, the mis-application of advanced techniques has put us into deep dodo (see below, as this theme will continue to be described and analyzed so as to propose solutions).

In any case, from a foundational view, one has to ask about the effectiveness of mathematics. This has been done; the field is quasi-empiricism in mathematics. Its inception coincides with the wonders related to the establishment of the standard model in support of our understanding of physics, work that used several types of mathematical advances of the past 200 years. The list of accomplishments from applying the knowledge of physics, effectively, is huge (and is generally known).


Publications related to the 'effectiveness' theme have been many, but the paper (1960) by Wigner is seminal: The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Essentially, Eugene suggested that we've been lucky (paraphrase) and that we do not know why this success has come about.

Aside: Listening to some of the practitioners (okay, some are very smart), one can perceive lots of hubris (ah, we'll continue with this, to boot).

Of course, debate continues. The work of Norman Wildberger has some application here (though, this is my take, not the Professor's). He's saying that the current views, in some cases, are too complicated, resulting in weaknesses that are insurmountable without some underlying change. Yet, people have been doing this difficult work for several generations now.

Aside: Of course, the populace has bifurcated into the numerate class (in this context, able to follow mathematics - or, paraphrasing von Neumann, able to get used to mathematics) and not. Yet, the importance of mathematics might suggest that it be amenable to all. We'll get back to that.


So, an operational question might be: why do we have to understand why things work, if we can show that they work suitably for repetition and duplication? In other words, the magician's class will be with us ever more (think Poe). By the way, managers are not of that class; rather, they expect 'magical' results from the work of others.


Now, before going further, let me relate one idea from seeing some videos (see below) by the Prof. He touts rational trigonometry. Okay, I've watched a couple of these (selected by the title's suggestiveness). For one thing, I'm wondering how we might compute with the 'rational' framework, somewhat like this. When doing solution derivation, transforms are the norm. I'll be looking to see how the Prof handles Fourier's work. But, let's take fuzzy logic. As with most abstract approaches, one has to take data related to real (as in being) things, fuzzify these, compute, then de-fuzzify to get results back into the proper domain (talking context, etc.). Now, with the rational approach, it may be better to do long chains of computation after converting to the rational representation. Naturally, one would have to deal with approximations, and such. Yet, during the computation, several troublesome problems would go away since the reals (number) would not be seen. Too, if one is doing things that require limits, or decisions about bounds, the rational approach ought to do this natively. Anyway, it's an interest of mine.
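A concrete sketch of that last point may help (my own illustration in Python, not anything from the Prof's videos): carry a long chain of computation in floats and round-off can swamp the answer; carry the same chain in exact rationals and the round trip comes back whole. Approximation then enters only at the conversion boundaries, much like the fuzzify/de-fuzzify steps above.

```python
from fractions import Fraction

def chain_float(x, n):
    # Apply y -> y/3 - 1 n times, then the inverse y -> (y + 1)*3 n times.
    y = x
    for _ in range(n):
        y = y / 3 - 1
    for _ in range(n):
        y = (y + 1) * 3
    return y

def chain_rational(x, n):
    # The same chain, carried out entirely in exact rational arithmetic.
    y = Fraction(x)
    for _ in range(n):
        y = y / 3 - 1
    for _ in range(n):
        y = (y + 1) * 3
    return y

# Floating point loses the input to round-off over a long chain;
# the rational version inverts exactly.
print(chain_float(0.1, 40))                 # far from 0.1
print(chain_rational(Fraction(1, 10), 40))  # exactly 1/10
```

The point is not that floats are useless, but that during the rational portion of the chain the troublesome behavior of the reals simply never appears.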


Here are some pointers to the Prof's work:

I started the foundations series at MF87 and got hooked. Then, I went back to the beginning. Later, I started the WildTrig series and started to bounce around. I will continue in that vein through the videos. He has additional lecture series: Linear Algebra, Algebraic Topology, Hyperbolic Geometry, and more.

The method that I'm going to follow is to watch and summarize (if motivated). In some cases, I might try to write about the topic before looking at his work, just to see how close I can come. Why? I've been at this (applying mathematics through computational systems) for decades (I'm actually quite a bit older than the Prof -- I've seen computation progress from the inside - like our buddy Al). The areas of focus included geometry in modern, advanced types of modelling that are stringent in their demands (that is, not the loosey-goosey approaches we find with gaming).

It's nice to see his focus on Geometry (Euclidean, okay?), which was thought of as a forgotten step-child for a very long time as people ran after topology and differential geometry (ought I say? abstractions following abstractions -- what's the term? abstract nonsense -- the Prof is right, getting a long way from 'reality' - as people try either to be like or to outdo our old friend, Albert). However, in practice, any engineering of products that have mass within the space of our planet (and beyond) makes heavy use of Geometry and Trigonometry.


In any case, there were splits between types of duties and responsibilities along a line that could have been troublesome but was never too much so (except, surprisingly, engineers unionized, in many cases). Good engineers, who could handle the mathematics, generally did not become managers (or politicians, for that matter). The really good ones got themselves into the golden handcuffs mode where they were essential, yet overlooked. Well, playing the political games was for the lower-echelon mind (anyway). And, the intellectual workers did magical things (as far as the business mind could see) and had to do so on demand (a source of irritation, to say the least). In fact, the smart companies coddled their good people (perhaps, some of the newer companies know about how to do this better).

But, as you see the nerd revenge coming about in computing (Google, FB, ...), there are still problems (take it from me, a user, but one who knows what's beneath the skirt even if I'm not allowed to use my eyeballs). Hey, you young guys, you're becoming troublesome for other reasons (we'll get there).


So, the Prof's re-do of the foundations might play a role here. No doubt there will still be mental gymnastics required, yet, the underlying basis would be amenable to all (philosophical notion here that has merit in extending the standard model's effectiveness). I saw a lot of contrived efforts at trying to pull out something understandable in a general sense from dense stuff expressed in all sorts of technical media. One goal of this type of thing was to get expectations and progress measurements synchronized between disparate (by their very nature) views.

The Prof mentioned, in MF87, that a lot of the perceived weakness in 'pure' mathematics can be traced to the growing flim-flam (didn't want to use gobbledygook) which comes from an improper basis. In fact, he used smell and sense, many times. Of course, he's not talking naive commonsense, so much. But, intellect (thanks, Prof Gardner) is more than we allow. And, we can learn from the Greeks even though the modern view likes to "dis" (take hip-hop, for example) those who went before (whereas, within the fabric - ah, yes, the book and Nova series -- they are there).

Myself, I've been pondering several things, such as, [how] we could get to a modern peripatetic method, even if it would be based upon a flat-world extension (thanks, Hilbert) that has more 'reality' than we appreciate (the earth may be round; the world is definitely flat)? Then, the whole notion of truth engines would use an improved model of human beingness, and the insights related to which would need some type of technical expression. The Prof might be on to something here. He did seem to stress intuition, etc.


08/08/2012 -- The Prof mentions several times his antipathy to thoughts about the infinite and how we may have conquered it (ala Cantor's work, et al). The Prof's approach, as he shows in several videos, can go toward the very large, and small, yet it would not be a convergence (another concept that he does not like). What would be a good term to use?

08/07/2012 -- Since writing this note, I have watched a few more videos. The Prof is thorough in his efforts. Found my first dissonance: MF77, on objects or expressions. Ah! I'll have to dig deeper to see his thoughts on computers. He doesn't like them to be used as crutches. Is he thinking that, perhaps, the computational system might be the realization of some mathematical truth (I can explain -- his focus on code sort of says not -- truth engines are the key, hence the blog)? What all this has come to, that is, the exposure the past few days to some original thought, is that I now know that foundational work is being done. I need to pay more attention. Also, my area of work needs to be in meta-mathematics, in part.

Modified: 08/08/2012

Saturday, August 4, 2012

Weakness in modern pure mathematics

Several posts, of late, have talked about the need for a re-look at technical issues. This will continue in a, hopefully, coherent fashion with measurable effectiveness (being cheeky).
All along, there has been a suggestion here that things are awry because of foundational issues. And, this is because of more than the concerns about quasi-empiricism. These issues go way back.


There is a chain, folks, that can explain (partly) the source of the problem. Dawkins, in fact, has used the argument. In the sense of science, let's put it this way (paraphrase) -- the biologist talks to the physicist (and chemist), the physicist talks to the mathematician, and to whom talks the mathematician? God. The adage is age-old but worthy of attention.

Aside: What goes along with this is that only some of a cohort set are the ones worthy (of value) enough to push back the horizon. This was fine when things were 'pure' and simple-minded (yes, people, the classic joke of the unaware thinker -- do we really need that?). But, give those considered worthy (yet they have feet of clay, as do the rest) a computer and the web, and watch out (yes, people, the crap that we see now is of this ilk -- how did reasoning, smart folks allow such a bad state to develop? - Harvard isn't off the hook - yet, as said before, the theory of multiple intelligences comes from that institution -- so, numeracy (as in, applications of mathematics -- all types) is not the epitome (STEM, a misdirection, of sorts) by any means -- well-rounded-ness? ever heard of that?).


Aside: In several posts, there may be use of undecidability and quasi-empirical in the context of computational issues of note. The discussion of this topic will continue under the context of Computability in all of the related blogs.


So, the quakings that we see (covered in postings) do deal with mathematics, with those who do it, and with its applications. We will continue to go on about that as the growing computational frameworks are troublesome in very many ways. As I've said, there is no timeline that I'm adhering to (as of yet). We'll follow things to wherever facts and features lead.

In that vein, I just ran across a lecture that is saying similar things, essentially. I'm going to be listening to this and related discussions. The lecture is titled Logical weakness in modern pure mathematics (see on YouTube) and is given by Prof. Wildberger at UNSW (University of New South Wales). What caught my eye was that on the first few frames of Part II we find this list of problems:
  • Inconsistent rigour
  • Problematic definitions
  • Reliance on 'axioms'
  • Computationally weak
  • Impoverished examples
Then, the Prof shows a partitioning into the troublesome areas and those not so much. The list includes: calculus/analysis, set theory/logic, geometry/topology, probability/measure theory and parts of algebraic geometry.


Logical weakness in modern pure mathematics
Aside: The lecture topic has 'pure' in it, whose study does not generally include applications (that's the whole point, removal via abstraction from anything of consequence). But, the Prof mentions early on that we need to have examples in order to provide a basis that is more solid (paraphrase - pun intended, to boot). As well, though, the argument here is that those who apply are resting themselves, many times, on what the theorists tell them. In a sense, we cannot have pure without its mappings to usage (I know, philosophically arguable) if we are to overcome the weaknesses alluded to by the Prof. Being cheeky, I feel that a peripatetic approach is required, albeit virtually founded. This type of thing is very much in the interest of the techies to pursue (rather than bigger pockets).


This post is a marker, as I expect things to get interesting as I follow his arguments. It's really nice to run across this. Why? Many times, the problems are as I've mentioned here. Teach a manager a little math and watch out. They'll run amok. Lots of what we see on the web (yes, Google, you too, mathematicians as you claim to be) is this type of thing. I marvel at how far computational methods have been pushed despite the fact that they have the slimmest of supporting theory.

You question this? Ah, open up your code and let us see.

In another realm, we have program trading. This is the highest type of idiocy possible, even if those propagating the madness have their Doctorates (sheesh, you guys/gals, ever consider foundations?).


06/25/2015 -- ACM Communications had an article (Created Computed Universe) that suggests that our computational prowess ought to lead to agnosticism rather than to anything else. Of course, my initial remark: So many modern minds conjure and contort in order to introduce what is not much different than what some knew many millennia ago in the desert.

08/08/2012 -- On effectiveness. It's there, and we take advantage of it (even if we have no clue as to its origin -- and let hubris reign more than humility). NASA is a very good example with Curiosity Rover. But, the recent Russian failure to launch a satellite says that things can go awry at any time despite good efforts. NASA has had its failures, too. The main issues are at the boundaries (to be discussed) and when extrapolations go way beyond what the basis will support (many examples, we'll get there). But, as the Prof showed with his rational model, we can extend, indefinitely, twixt two rationals. So, the boundary (reference) above has, at least, a dual meaning.
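That remark about extending indefinitely twixt two rationals is just the density of the rationals; a short sketch (mine, using Python's fractions module) makes it plain:

```python
from fractions import Fraction

def between(a, b):
    # The midpoint of two rationals is again a rational,
    # strictly between them whenever they differ.
    return (a + b) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = between(a, b)              # 5/12
assert a < m < b               # always room for one more
assert a < between(a, m) < m   # and again, without end
```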

Aside: The Prof mentions several times his antipathy to thoughts about the infinite and how we may have conquered it (ala Cantor's work, et al). The Prof's approach can go toward the very large, and small, yet it would not be a convergence (another concept that he doesn't like). What would be a good term to use?

08/07/2012 -- MF77 deals with object oriented vs expression oriented. The former has some relations with category theory, etc. The latter one might think of as 'code' related. We can think of demonstrating something computationally rather than jawboning (it's good to hear that this is what mathematicians do). The Prof seems to be pushing computer orientation, which raises other issues, but we'll keep listening. If he can keep computers from becoming an even bigger mystery, then all would be well. Otherwise, we're talking the development of a monolith, in a certain sense, that would become troublesome, indeed.

08/06/2012 -- We'll jump to the meat and go through the sequence on Rational Trigonometry. His book and papers are available at the UNSW site.
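For anyone wanting a taste before the videos: the sequence's two central notions are quadrance (squared distance) and spread (roughly, the sin-squared of the angle between two lines), both of which stay rational for rational inputs. The definitions below follow the published ones; the Python rendering is my own sketch, not the Prof's code.

```python
from fractions import Fraction

def quadrance(a, b):
    # Quadrance: the squared distance between points a and b,
    # so no square root is ever taken.
    return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def spread(u, v):
    # Spread between lines with direction vectors u and v:
    # the rational analogue of sin^2 of the angle between them.
    cross = u[0] * v[1] - u[1] * v[0]
    return Fraction(cross ** 2,
                    (u[0] ** 2 + u[1] ** 2) * (v[0] ** 2 + v[1] ** 2))

assert quadrance((0, 0), (3, 4)) == 25           # 5 squared, rationally
assert spread((1, 0), (0, 1)) == 1               # perpendicular lines
assert spread((1, 0), (1, 1)) == Fraction(1, 2)  # the 45-degree pair
```

Everything here is integer or rational arithmetic; perpendicularity is simply spread equal to 1, and no transcendental function appears anywhere.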

08/04/2012 -- MF3, at 7:38, shows why 'New Math' failed.

08/04/2012 -- Just went through MF1 and MF2. In essence, the Prof is establishing a foundations of mathematics that is amenable to anyone. The analog? Think about the intelligence tests that are supposedly without any cultural bias. The Prof's approach ought to end with a mathematics that will be accessible to any person who wants to make the effort to learn it, not just the elite class. The consequence: we may find discoveries far beyond our imagination since cultural biases are the most limiting especially for those who are of the advanced types whose talent causes them to be forced into (or reinforced toward) the highest (purported, remember?) cultural status.

08/04/2012 -- The first video of the series (count, as of now: 101). And, as of today, the last video.

At 22:56 of MF87
08/04/2012 -- Will do some notes here as we go along. On finishing up MF87, it was nice to hear him say that the youngsters can smell when things are awry. They may not be able to put their finger on what is the problem, but that there is a problem is, somewhat, obvious to them. I'm not going to follow the lectures sequentially but will bounce around by interest and applicability to my work. Notes will be here; there will be occasional summaries posted with comments about how it all applies to the topic of this blog (and its associates).

Aside: von Neumann said that we can't understand mathematics but get used to it. So, that's by use, repetition, etc. Is the Prof arguing the Hilbert side?

Not being critical at this early stage, as I want to hear what he has to say in its entirety. See 22:56 (image), which is right on!

Modified: 06/25/2015

Thursday, August 2, 2012


Five years ago, the intuition of the blogger was good, in that most advanced work seemed to honor the denseness of semidecidable systems. And, that is being kind, as most argued that we did not have any need for concern about decidability for ‘finite’ systems. Too, we knew that the standard model had been successfully tweaked over 100 years or so. Having this success made us lean toward hubris.

But, in actuality, if you look at the world (today or anytime), there are many more “unknown unknowns” (recent look via 7'oops7) than our old friend Rumsfeld would admit to. Of course, the craziness that occurred late in the year 2001 had a real and visceral effect; it was too easy prior to that for some to have an unreasonable comfort about our being untouchable in the area bounded by the two large bodies of water. We are still trying to unwind out of effects from those, and later, events.

At the same time, though, some of those who could, exploited the observed success of certain methods by growing the ‘chimera’ way beyond its rational basis. That is, the methodical approaches from physics (at many levels) were brought out to build the apparatus that runs the markets. If you must, sleight-of-hand keeps the take of some as a perpetual sequence (right there is part of the reason for the bad smells) while obfuscating the reality by trickery and faulty mathematical arguments.


From the first post, the blogger has been suggesting that various mis-applications are the norm and that these are enabled by the growing computational frameworks. You see, the basis is essentially undecidable (which we’ll show); our progress has largely been predicated upon the luck of smart people (we’re in deep dodo, people); yet, we have the charlatans arguing that things are (would be) great if we continue to follow them on the path to perdition.  


If we can identify the alleged misdeeds, ought there ensue an orderly discussion about what to do? If only. As it then becomes a conflict between truth and power (and we know, just look at the current election, power has its basis in money – truth, on the other hand, is not of money – this, too, we’ll show).

Not that we expect a fairy-tale ending, but one can still think (and cannot be faulted for thinking) that realizations would be required in order to bring some clarity into situations. But, with Corporations seen as people (in what reality is such a thing conceivable?) the whole environment has changed. The blogger will not be the cynic who thinks that resolution is not possible, but we have our work cut out for us.

The initial focus on how abstraction's appeal has led us astray was correct and will continue. The topic needs to be further clarified.


Now, in regard to the economic mess, before the time of the start of the blog, some of the rumblings of structures quaking were already heard by some. Many were awakening to the fact that we were going to experience some type of downturn. What was unexpected was the non-stop exposure of stupidity (idiotic notions being widely disseminated by brilliant minds) almost on a daily basis, the sheer depth of the crap that was put there by our leaders, and the length of time in which there has been no resolution of the underlying issues (perps still walk).

WTF! Again, WTF!

All sorts of descriptions are possible, such as the old Rip Van Winkle one, waking up to this fact after a couple of decades or so: the best and brightest were given that label precisely for the fact that they could fill the pockets of the fat cats -- to be defined, again -- and themselves (ah, FB is more than a metaphor; it's a poster boy, to boot).


08/08/2012 -- We need the effectiveness (unreasonable or not) to be cast in a new light. Curiosity Rover, et al, is an example of how things can go. 

08/05/2012 -- Added pointer to updated look at unknowns via 7'oops7. Some editing (periodicals have teams plus an iterative process, yet you'll see typos now and then (more frequently nowadays with computer assist being the vogue) -- several lessons in that, perhaps). 

Note: See comment on using another approach, below. Need to tweak it. All the style information that was carried over is troublesome. Oh well. Of course, I know how to filter out and get minimal HTML, but why is it seen as smart (or progressive) to bring in this type of crappy assistance (not any better than it was a few years ago)? 

08/04/2012 -- Weakness? Yes, indeed. 

08/04/2012 -- This was the first post that was developed in a modern editor and then copied to the blog editor. We'll try that for a while. Having the more intuitive interface ought to help expositions formulate.

Modified: 08/07/2012