Effectiveness? There can be much discussion about this, but, essentially, let's just say that some 'thing' is effective if it works, or produces results, as expected (expectation management might be of interest here; see prior posts in this category). If we look backward at STEM, we see plenty of evidence of effective approaches that are remarkably productive. Examples abound: the computer (even if it's within a PDA) being used to read this post, the system behind this blog's posts, the communications scheme (the WWW, et al.), and much more.

Example of effectiveness
Financial Times (08/07/2012) 
We have all sorts of others. Take NASA's Curiosity Rover (and its older cousins) or SpaceX, for instance. We could spend hours discussing what is behind these. In this case, mathematics and computation stand out, though a lot of engineering has been involved, to boot (most of which applies advanced computational methods). There's medicine and its marvels to look at, too.
Business? Well, we can find glimmers here and there, if we discount the idiocy of some finance. But, the misapplication of advanced techniques has put us into deep doo-doo (see below, as this theme will continue to be described and analyzed so as to propose solutions).
In any case, from a foundational view, one has to ask about the effectiveness of mathematics. This has been done; the field is quasi-empiricism in mathematics. Its inception coincides with the wonders related to the establishment of the standard model underpinning our understanding of physics, work that used several types of mathematical advances from the past 200 years. The list of accomplishments from effectively applying the knowledge of physics is huge (and generally known).

Publications related to the 'effectiveness' theme have been many, but the 1960 paper by Wigner is seminal: The Unreasonable Effectiveness of Mathematics in the Natural Sciences. Essentially, Eugene suggested that we've been lucky (to paraphrase) and that we do not know why this success has come about.
Aside: Listening to some of the practitioners (okay, some are very smart), one can perceive lots of hubris (ah, we'll continue with this, to boot).
Of course, debate continues. The work of Norman Wildberger has some application here (though, this is my take, not the Professor's). He's saying that the current views are, in some cases, too complicated, resulting in weaknesses that are insurmountable without some underlying change. Yet, people have been doing this difficult work for several generations now.
Aside: Of course, the populace has bifurcated into the numerate class (in this context, those able to follow mathematics, or, paraphrasing von Neumann, able to get used to mathematics) and the rest. Yet, the importance of mathematics might suggest that it be made amenable to all. We'll get back to that.

So, an operational question might be: why do we have to understand why things work, if we can show that they work suitably for repetition and duplication? In other words, the magician's class will be with us evermore (think Poe). By the way, managers are not of that class; rather, they expect 'magical' results from the work of others.

Now, before going further, let me relate one idea from seeing some videos (see below) by the Prof. He touts rational trigonometry. Okay, I've watched a couple of these (selected by their titles' suggestiveness). For one thing, I'm wondering how we might compute with the 'rational' framework, somewhat like this. When doing solution derivation, transforms are the norm. I'll be looking to see how the Prof handles Fourier's work. But, let's take fuzzy logic. As with most abstract approaches, one has to take data related to real (as in being) things, fuzzify these, compute, then defuzzify to get results back into the proper domain (talking context, etc.). Now, with the rational approach, it may be better to do long chains of computation after converting to the rational representation. Naturally, one would have to deal with approximations, and such. Yet, during the computation, several troublesome problems would go away, since the reals (the numbers) would never be seen. Too, if one is doing things that require limits, or decisions about bounds, the rational approach ought to handle these natively. Anyway, it's an interest of mine.
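The fuzzify/compute/defuzzify loop mentioned above can be sketched in a few lines. Here's a toy fan-controller in Python; the membership shapes, temperature ranges, and rules are entirely my own invention for illustration, not anything from a particular fuzzy-logic system:

```python
# Toy fuzzify -> compute -> defuzzify pipeline (all ranges and rules invented).

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Map a crisp temperature to a crisp fan speed via fuzzy rules."""
    # Fuzzify: degrees of membership in 'warm' and 'hot'
    warm = tri(temp_c, 15, 25, 35)
    hot = tri(temp_c, 25, 40, 55)
    # Rules: IF warm THEN speed 50%; IF hot THEN speed 100%
    # Defuzzify: weighted average of the rule outputs (a simple centroid stand-in)
    weight = warm + hot
    return (warm * 50 + hot * 100) / weight if weight else 0.0

print(fan_speed(30))  # a blend between 50 and 100
```

The point is the shape of the pipeline: crisp data in, a detour through the fuzzy domain, crisp data back out; the same round-trip structure is what a 'rational' computational framework would need.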
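As a sketch of what 'long chains of computation after converting to the rational representation' might look like, here are quadrance and spread (Wildberger's replacements for length and angle) computed with Python's fractions module; the 3-4-5 triangle is my own choice of example. The exact values come out without any real-number machinery:

```python
# Exact 'rational trigonometry' arithmetic: quadrance and spread instead of
# length and angle, so no square roots or transcendental functions appear.
from fractions import Fraction

def quadrance(A, B):
    """Squared distance between points; rational whenever the coordinates are."""
    return (B[0] - A[0]) ** 2 + (B[1] - A[1]) ** 2

def spread(A, B, C):
    """Spread at vertex A between rays AB and AC (the angle substitute)."""
    ux, uy = B[0] - A[0], B[1] - A[1]
    vx, vy = C[0] - A[0], C[1] - A[1]
    dot = ux * vx + uy * vy
    return 1 - Fraction(dot) ** 2 / (quadrance(A, B) * quadrance(A, C))

# A 3-4-5 right triangle, with exact rational coordinates
A, B, C = (Fraction(0), Fraction(0)), (Fraction(4), Fraction(0)), (Fraction(0), Fraction(3))
sA, sB, sC = spread(A, B, C), spread(B, A, C), spread(C, A, B)
print(sA, sB, sC)  # 1 9/25 16/25 -- exact, no floating point in sight
# Spread law: spread over opposite quadrance is constant across the triangle
assert sA / quadrance(B, C) == sB / quadrance(A, C) == sC / quadrance(A, B)
```

Note that approximation only enters (if at all) when converting measured data in and results out, much like the fuzzify/defuzzify boundary; everything in between stays exact.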

Here are three links to the Prof's work:
I started the foundations series at MF87 and got hooked. Then, I went back to the beginning. Later, I started the WildTrig series and started to bounce around. I will continue in that vein through the videos. He has additional lecture series: Linear Algebra, Algebraic Topology, Hyperbolic Geometry, and more.
The method that I'm going to follow is to watch and summarize (if motivated). In some cases, I might try to write about the topic before looking at his work, just to see how close I can come to his treatment. Why? I've been at this (applying mathematics through computational systems) for decades (I'm actually quite a bit older than the Prof; I've seen computation progress from the inside, like our buddy Al). The areas of focus included geometry in modern, advanced types of modeling that are stringent in their demands (that is, not the loosey-goosey approaches we find with gaming).
It's nice to see his focus on Geometry (Euclidean, okay?), which was treated as a forgotten stepchild for a very long time as people ran after topology and differential geometry (ought I say? abstractions following abstractions; what's the term? abstract nonsense; the Prof is right, this gets a long way from 'reality', as people try either to be like or to outdo our old friend, Albert). However, in practice, any engineering of products that have mass within the space of our planet (and beyond) makes heavy use of Geometry and Trigonometry.

In any case, there were splits between types of duties and responsibilities along a line that could have been troublesome but was never too much so (except, surprisingly, engineers unionized in many cases). Good engineers, who could handle the mathematics, generally did not become managers (or politicians, for that matter). The really good ones got themselves into the golden-handcuffs mode where they were essential, yet overlooked. Well, playing the political games was for the lower-echelon mind (anyway). And, the intellectual workers did magical things (as far as the business mind could see) and had to do so on demand (a source of irritation, to say the least). In fact, the smart companies coddled their good people (perhaps, some of the newer companies know how to do this better).
But, as you see the nerd revenge coming about in computing (Google, FB, ...), there are still problems (take it from me, a user, but one who knows what's beneath the skirt even if I'm not allowed to use my eyeballs). Hey, you young guys, you're becoming troublesome for other reasons (we'll get there).

So, the Prof's redo of the foundations might play a role here. No doubt there will still be mental gymnastics required; yet, the underlying basis would be amenable to all (a philosophical notion that has merit in extending the standard model's effectiveness). I saw a lot of contrived efforts at trying to pull something generally understandable out of dense stuff expressed in all sorts of technical media. One goal of this type of thing was to get expectations and progress measurements synchronized between disparate (by their very nature) views.
The Prof mentioned, in MF87, that a lot of the perceived weakness in 'pure' mathematics can be traced to the growing flimflam (didn't want to use gobbledygook) which comes from an improper basis. In fact, he used smell and sense, many times. Of course, he's not talking naive common sense, so much. But, intellect (thanks, Prof Gardner) is more than we allow. And, we can learn from the Greeks even though the modern view likes to "dis" (take hip-hop, for example) those who went before (whereas, within the fabric, ah, yes, the book and Nova series, they are there).
Myself, I've been pondering several things, such as how we could get to a modern peripatetic method, even if it would be based upon a flat-world extension (thanks, Hilbert) that has more 'reality' than we appreciate (the earth may be round; the world is definitely flat). Then, the whole notion of truth engines would use an improved model of human beingness, and the related insights would need some type of technical expression. The Prof might be on to something here. He did seem to stress intuition, etc.
Remarks:
08/08/2012: The Prof mentions several times his antipathy to thoughts about the infinite and the claim that we may have conquered it (à la Cantor's work, et al.). The Prof's approach, as he shows in several videos, can go toward the very large, and the very small, yet it would not involve convergence (another concept that he does not like). What would be a good term to use?
08/07/2012: Since writing this note, I have watched a few more videos. The Prof is thorough in his efforts. Found my first dissonance: MF77, on objects versus expressions. Ah! I'll have to dig deeper to see his thoughts on computers. He doesn't like them to be used as crutches. Is he thinking that, perhaps, the computational system might be the realization of some mathematical truth? (I can explain; his focus on code sort of says not; truth engines are the key, hence this blog.) What all this has come to, that is, the exposure these past few days to some original thought, is that I now know that foundational work is being done. I need to pay more attention. Also, my area of work needs to be, in part, in metamathematics.
Modified: 08/08/2012