Tuesday, January 14, 2020

Boeing and its roles

This blog (July 2007) and its sister (7'oops7 - August 2007) started thirteen years ago. The motive of this blog was truth engineering, which developed under the auspices of Boeing and several projects. These were principally related to commercial aircraft but included others.

How did truth engineering come about? I was involved in the AI work that had its infamous winter. After the winter began, the context and tools changed a little, but we continued for over a decade with knowledge based engineering (KBE). Not unlike what we have seen over the couple of decades of the www (IP, internet), advances in computing have inspired lots and lots of progress. And, because of this, we have a landscape of capability without any overview that means much.

In fact, those pushing at the edge are now at the point where they have to deal with black boxes of unknown value and possible failure points. Nothing new. We knew of this type of thing, and the cyclic nature of the energies, decades ago. It's just that things are much more complicated now.

In fact, I remember my first drafting of thoughts about truth engineering in the context of KBE and the new thing, the internet, that was taking things in new directions. I puzzled over how to express the issues of ideas that were on the verge of becoming problematic. As one can see with the blog, I merely started in 2007 with thoughts that were to cohere at some point. When was this to happen?

Well, all along, I noticed little cracks in the underpinnings. And, we have a lot to discuss about that. Too, one saw major shifts as products moved people in directions that were not healthy. How could I see this? Why do you think that I used 'silly valley' (billionaires with no real sense of the nature of things - yes, categorically so)? In any case, of late, the acceleration along the path to perdition has become more obvious, generally.

So, it's time to resurrect this approach, again, after a sidetrack in Quora. There, I answered questions and had blogs. They moved the latter to Spaces, which I will use; however, I need to organize years of work first.

One huge factor with this rework is the 737 Max. Having seen what workers wrote, I want to explain this to Boeing management. After all, engineers are the core of the business. If engineering cannot be ethical and do things right, then where will leadership arise? I mean true leadership. That is, smart people lead themselves, even when they join groups. We don't know much about that dynamic, but Carl G. Jung's thoughts apply.

So, we're not picking on Boeing. KBE did phenomenal work, at the time. There were even efforts at machine learning. So, the current hype comes about from better tools and toys. The understanding has diminished. But note, truth engineering is right down this alley.

Oh yes, this is cursory. But, let's look at one thing that is important. At the guts of machine learning (say, CNNs) are tricks of mathematics that might be nuanced a little differently now, but we knew them decades ago. Like I said, the toys and tools are better. But those came from hard work by lots of folks.
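To make that point concrete, here is a minimal sketch (Python with numpy; illustrative only, not from any particular system). The core operation of a CNN layer is just a sliding dot product, mathematics that was on the shelf long before the current wave.

import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2D cross-correlation: slide the kernel and take dot products.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw]
            out[r, c] = np.sum(patch * kernel)  # just a dot product
    return out

# A classic edge-detection kernel, known long before 'deep learning'.
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)

image = np.random.rand(8, 8)
print(conv2d(image, sobel_x).shape)  # (6, 6)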

The magic of computing lets abstraction's appeal get to be too much, since we really have no way to dampen that energy. On the other hand, we want to encourage innovation. Ah, what might be a key thing? Ethical economics. And, I am not referring to macro manipulations that are the love of the political crowd. No, I am talking about a micro and operational approach that I have seen no one, but myself, take. If I overlooked something, I'll admit it. But show me.

In the meantime, let's look at crapularity (necessary hermeneutics) and at the fact that we've created a deep set of unhealthy conditions. The real tragedy is that all of this was done in the name of science (yet, it has been accepted due to the sexiness - well, we'll get to anthropology, too).

'Enuf for one day.

Remarks: Modified: 01/14/2020

01/14/2020 --




Friday, January 3, 2020

Summary, 2019

This year, we had the 737 Max in the spotlight. So many themes relate to that. We'll be back.

List of 2019 posts.

Remarks: Modified: 01/03/2020

01/03/2020 --

Thursday, August 8, 2019

Nash equilibria and hardness

Back when there was more of a logic focus, it seemed easier to argue for limitations, since one could use terms like undecidability. From my experience in KBE, we found enough puzzling events to talk of 'vertigo' (yes, lost in the fog). From an operational sense, we used a smart 'man-in-the-loop' to cut ourselves out of intractable situations (got 'er done).

Then, Bayes came along. He is not recent; it is just that computation finally caught up. I used him for a paper five decades ago and heard tsk-tsk. But, then, they laughed at Leontief, too. There were other tricks that have evolved. Systems became (much) more powerful. Lots of cheap energy was around and about. Too, seemingly, notions of constraint were left on the side of the road.
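For the younger reader, the rule itself is small; what changed is the compute around it. A minimal sketch (Python; the numbers are made up for illustration):

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), with illustrative numbers.
p_h = 0.01              # prior: a rare condition (say, a fault) holds
p_e_given_h = 0.95      # likelihood of the evidence if the fault is present
p_e_given_not_h = 0.05  # false-alarm rate

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior

print(round(p_h_given_e, 3))  # about 0.161 - the evidence helps, but the prior still rules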

Data existence ballooned, though there are many issues left to discuss. That allowed a whole industry to grow (data science). AI/ML/DL took advantage of the situation. Games were won that had not been within grasp before.
Linguistics advanced, too. Computer-generated text rose like the sun. Yet, to this old guy's eyes, we're dealing with a situation not unlike the million monkeys and a novel. Uncountable numbers of them creating?

The stuff does not resonate. But, I'm talking talents not recognized, yet. So, that's not even on the table.

To me, numeric abilities allowed a peanut-buttering without it being obvious. They force convergence. It seemed, to hear the hype, that we're now all in a rosy time (colored glasses, for sure). But, again, the old guy will point to 12 years ago when the risk experts were saying, piece of cake - there will never be another downturn (on the eve of the very thing).

What gives? We need to discuss. So, Aviad shows hardness. There are regular discussions at Stanford and elsewhere.
To the old guy's mind, it's like a re-opening of what was known. Or taking it back off the shelf. We'll have to rephrase some stuff. But, there are limits. And, there are things that AI needs to know, dealing with structure and more. It's nice to see Nash re-evaluated.
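As an aside, small games are still easy to handle; the hardness results are about what happens at scale (approximate equilibria in general games). A minimal sketch (Python with numpy; matching pennies, solved from the indifference condition):

import numpy as np

# Row player's payoffs for matching pennies (zero-sum: column player gets the negative).
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

# Unknowns: the row mix (p1, p2) and the game value v. At equilibrium the
# column player is indifferent between her two columns, and p1 + p2 = 1.
M = np.array([[A[0, 0], A[1, 0], -1.0],   # payoff against column 1 equals v
              [A[0, 1], A[1, 1], -1.0],   # payoff against column 2 equals v
              [1.0,     1.0,      0.0]])  # probabilities sum to 1
b = np.array([0.0, 0.0, 1.0])

p1, p2, v = np.linalg.solve(M, b)
print(p1, p2, v)  # 0.5 0.5 0.0 - mix evenly, game value zero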

Used for a Quora question: With Nash equilibria being shown as intractable, ought we consider that there are numeric limits to AI that are being ignored?

Remarks: Modified: 08/08/2019

08/08/2019 --

Thursday, May 9, 2019

Test by tragedy

Unfortunately, we get this, especially when things get complicated.

Ethics might be a way to bring decisions related to two recent events back to the table.

Remarks: Modified: 08/08/2019

08/08/2019 -- Let's just say this: an overly-complicated procedure came out with an unstable design that would have been caught in the older days. However, the process was changed and cut too far. The issue got by and was given to the compute folk to handle. Tsk.

Sunday, December 30, 2018

Summary, 2018


Prior years: 2011, 2012, 2013, 2014, 2015, 2016, 2017 (missing), 2018.

Remarks: Modified: 12/30/2018

12/30/2018 --

Sunday, November 18, 2018

Update

So, we're past what is called the mid-term (something like that) election. The blues took the House from the reds. That ought to dampen some of the whining. We will see. The Fed has started to quit the flaying of the savers. Some savings rates are up to 2% and more.

We never whined here. No, it has always been righteous indignation. Like the prophet of old, we could not see much but things going crazy.

However, now is the time for reasonable folks to step forward, and we intend to do so.

Of late, our creative juices have been spent on two avenues: Thomas Gardner Society, Inc. site and blog; Quora, the indescribable, so to speak.

As we look at the influence of social media on society, we have to get back into that realm's need for truth engineering. We intend to use wind-speed social media as an anchor.

Remarks:  Modified: 11/18/2018

11/18/2018 --

Saturday, October 13, 2018

Overview of AI

The gist of this post is a recap, of sorts, done briefly. I have been working in Quora, for the most part, but intend to get back to this blog.

A recent issue of the CACM (Communications of the ACM) carried an overview of AI by the head of CompSci at UCLA. His view resonated really strongly with me. So, I want to provide a quick look at some of the key items.

First, the article: Human-level intelligence or animal-like abilities?  Author: Adnan Darwiche.

Adnan starts by summarizing some of the recent successes (if you can call them that) that got so much attention in the press. The computer (AI driven) winning in Go. The computer learning how to play a game and beating expert humans. The list goes on. And, the basis is the neural net approach, which is not new. Marvin Minsky wrote of these half a century ago.

But, too, people got upset. Some believe in the coming singularity. I say, worry about the current crapularity. Oh, the robots are coming. Just like there was a warning 200 years ago about the invasion of the Brits who wanted to spank the free Americans.

Adnan mentions some of the mainstream areas of AI that have been ignored in the rush after the numeric paradise that is being given us by machine learning. He mentions "knowledge representation, symbolic reasoning, and planning" in short. Adnan was there when the AI winter came about from the mania of the '80s. He was a student then.

I would tell him that knowledge based engineering came out of that thrust and went fine for decades doing great things. I can (and will) point to writings about this. However, let's continue with Adnan's look from his lofty position.

He reminds us that AI had a reasoning approach from the beginning. The recent work? No, it's numeric processing, pure and simple. Now, he mentions that particular neural nets (NNs, of which there are many varieties) are doing some type of representation of a model. Given that, we can understand these supposed black boxes with the proper approach. He uses two concepts: model-based and function-based. The former was (is) a huge part of AI, in general. You know, the latter was seen in engineering, too.

So, what we are talking about is that the tried-and-true of science and engineering somehow got lost in the rush after the search for the magic function (oh yes, there ain't none). The main argument for the NNs is the advancements that we see in "sophisticated statistical and optimization techniques" that do seem to have amazing results.
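To picture the function-based view, here is a minimal sketch (Python with numpy; the sizes and learning rate are arbitrary assumptions of mine, not anything from Adnan's article): a tiny one-hidden-layer net, trained by plain gradient descent, approximating sin(x). Least squares and the chain rule; the tools are better, the mathematics is old.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of tanh units: y_hat = tanh(x W1 + b1) W2 + b2
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)      # hidden activations
    y_hat = h @ W2 + b2           # the 'function' being fit
    err = y_hat - y               # residual
    # Backpropagation: the chain rule, nothing more.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(float(np.mean((y_hat - y) ** 2)))  # mean squared error shrinks toward zero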

But, as Adnan said, it seems to be that we're chasing what animals already do well. And, not doing as well as they. The article provides sufficient examples. But, consider this: no animal can make a computer.

Too, there seems to be a huge divide between the true believers here (chasing after money, power, fame, what have you) and those more scientifically oriented. Besides, the whole of the atmosphere has been polluted with out-of-control hype. Adnan uses 'bullied by success' (let's talk some type of inoculation, shall we?).

What is real here? Well, Adnan has the answer. He says that we're dealing with 'function engineering' where one goal is to approximate cognitive functions. There are more. In fact, Adnan's position is strongly influenced by Judea Pearl who recently wrote on the subject (The Book of Why: The New Science of Cause and Effect).

To me, Adnan is talking about the old middle-out technique that has been the reality of complicated systems all along. We could think of the NNs as supporting bottom-up. But, be aware that there are more areas of technology awaiting attention. The top-down would be somewhat assisted by the traditional approaches. Albeit, we are really talking about the human expert rising to the challenge. Per usual.

However, NNs might just be universal, as some have claimed from the beginning (not Marvin, as he was a critic to his death), in some simulation sense. Guess what? Physics and reality might be closely mimicked by our computationalists' approaches; however, being is there regardless. The 'crap' from extensive computer work (and there are whole bunches to denote) grows. Unfortunately, that whole bit of accumulating residue gets no (or little) attention.

Throughout the paper, Adnan quotes anonymous sources. There is good reason to not get into the cross-hairs at this time. What is really at stake? Well, truth engineering has been dealing with these issues from its inception (my work of the past two decades). So, our pace has (might have) been that of a snail, yet it's tied to more than gaming. Oh yes, we have a whole lot of work to do in reviewing the just-past decade and one-half.

Remarks: Modified: 10/16/2018

10/16/2018 -- As Adnan suggested, since there is little scientific thinking going on, I see that we have raging imagination causing the 'running amok' atmosphere. And, we forget the past. This little study has major importance: Carnegie Mellon is Saving Old Software from Oblivion. Those who know have been concerned that experiments (heavily reliant upon computers) cannot be repeated, due to several technology-related factors. One is the flash push (wow factor) versus sustainable science.