Sunday, March 24, 2024

HyperPhysics

Our usual first stop in research is Google and then the Wikipedia link that it provides if no better link shows up. Wikipedia then branches off. Or, scanning the information allows us to form a better prompt. 

Prompt? Yes, there's a new field with the title of "prompt engineering" which pertains to controlling the output from a GenAI process (or session). At first, it seemed rather facetious. However, as the concept took hold, studies began to be done using this method, some of which are technical in nature. 

On the other hand, a "prompt" from outside will not overcome deficiencies latent in the learning process. So, that will become more obvious as these studies go on. Essentially, "crap" cannot be trained out. Preprocessing, in the old sense of "a priori," is a necessity (was and will be, as well). 

But, we're not tracking down that path. We're looking at content and configuration, which was our favorite topic for a while and has not gone away. 

So, we were searching on Navier-Stokes and solenoid. The first was prompted (pun) by a paper that discussed Einstein's PhD thesis. It's at a paid site, so we only saw glimmers of the topic and will look further. But, the other prompt was that Einstein and Infeld's look at the evolution of physics touches upon the device in their discussion of the "rise of the mechanical" view. The interest has to do with the fact that this work is 100 years old, plus or minus, as 1905 was the year that people became aware of Einstein's thinking. 

Too, AIn't and such, plus things relating to ourselves, will be understood better with an updated grasp of what Einstein was trying to show. Movies, such as Oppenheimer, bring out some of the notions and concepts and problems related to technology and its advances. 

So, to today's theme. One of the links from Wikipedia went to a site at Georgia State which uses an older format to present ideas with interactive demonstrations via Java and JavaScript. To me, that's a great find. I have seen it before. I have pointed to my favorite site which dealt with findings in the realm of mathematical physics and which is still there in a 1995 format. There has been an update which references back to the classical site. 

Like we're doing. Here is the site and its graphic. The site was set up for teachers of High School Physics and is available as an app. 

Einstein experiment
At the same time, they have a HyperMath site that is very good.

Staying with the theme, there is a new site that represents changes over the past decade with respect to what I call configuration. 

HyperPhysics was featured in PhysicsWorld, which offers current information. Their site has a more modern look and configuration. There isn't the same level of interaction. But, we're considering look and feel. Consider that the topic is not a direct comparison but rather concerns for managing information going forward. 

As a side note, Geertruida de Haas-Lorentz provided an experiment to a French museum. It had been done by her husband and Einstein in 1915 and showed a connection between magnetism and "angular-momentum of electrons" using a simple device.    

Remarks: Modified: 03/24/2024

03/24/2024 -- Tied to the TGS, Inc. focus on technology: Geertruida de Haas-Lorentz


Thursday, March 21, 2024

Albert and his boys

As in, Albert Einstein and the guys (and gals) of Physics. His ideas were central to the development of truth engineering. The trouble is, what are his ideas? He, himself, complained that once the mathematicians got hold of his model, he didn't understand it. Too, he covered a lot of bases over his time. So, in truth, we will have to track through all of that, over time. 

And space? Look at GenAI (which really is AIn't, as I have mentioned many times since 2021) and how it represents unwarranted "search" through hyper/multi-dimensional spaces. We will look at that closely. This recent work is dabbling with heterogeneous stuff and assuming that transforms will bring in homogeneity. But, the work of Einstein dealt with things that were more amenable to being handled in the "canonical" fashion that homogeneity brings. 

Note: That last paragraph points to the essence of the main issue of the cloud, which we can (and will) go into in detail (over time and space). However, when we look at the foundational issues being addressed by Einstein, we will see a close stepping forth (dance-like) of mathematics and the operational, so close as to become problematic in the extreme. GenAI is a mere symptom of a huge problem. Time to start to grapple with that. 

All of this plus more is on the table as we bring truth engineering to bear using knowledge based engineering as an initial framework. Let's look at one book that will be of use for this work. It's his book with Infeld (so, E&I) with the title of The Evolution of Physics, using archive.org's copy. The authors briefly set the stage and come through the usual trek of Galileo and Newton on down the pike. They look at the rise and fall of the mechanical view after which, of course, we get the introduction of "field and matter" that comes with the advent of relativity. They go through the ideas of (and which contributed to) special and general relativity and finally end with a view of the quantum work that somewhat merges (interlaces) with relativity. 

Okay, we all know of the visual arguments that Einstein used (train, elevator, and others). The book has no mathematics, so the authors grunge out explanations using words. It works. On the other hand, there is something to what Feynman mentioned with respect to not being able to understand quantumness without mathematics. 

I say, mathematics drove the development of the thoughts. We will spend a lot of time there. BTW, if there seems to be a different tone to these posts, that's quite observant. If not, that's okay, too. But, the work to date was mostly surveying all of the thoughts, as expressed over the past few centuries that relate to the themes associated with deciding what's true. That issue became imperative as the computer evolved, but we did not. 

Want an opinion? We have bent over backwards over the past few decades to make the computer look good. Oh yes. The biggest argument for this? The descent brought by misguided development of the web and its muddy cloud, which was then fed into ML (machine learning) in order to become some sort of omnipresence or know-it-all or what have you. How about overall jerk (3rd derivative, if you're an engineer)? 

But, back to E&I. I have been through the book several times. Why? I had it for some time and mostly dabbled (as Koestler mentioned with respect to library angels). Of late (blame GenAI's rise and verge toward a fall), I took the time to step through E&I's logic. Plus, having used GenAI to probe somewhat into the workings that were hidden, the imperative nature became apparent. So, we'll get through the whole of it. 

For now, I just realized that E&I mentioned television. The book dates to about 1938, when television was emerging into public access. Einstein died in the 1950s. The book was republished. I said that he looked to make changes to bring it up to date and only made a few. So, the book stands as a point-in-time reference for the needed discussion. 

See pgs 189 and 190 for the discussion of an inertial frame and its clock. E&I lay out how to pick some central position and try to synchronize. Remember, this was from the 1930s. But, there were rudimentary sets which showed the concept. Now, jump ahead. We have the internet with its cloud and more. Too, we have sophisticated clocks that can sync across the country. When we watch a game in the U.S., everyone seems to consider that the variability is minor with respect to delays and such. After all, consider the work put into supporting real-time streaming. 

It was good enough for on-line (say Zoom) meetings during the pandemic. People walk with phones to their ears (some drive, too) and seem to think that they're having a real conversation with little delay in the signals. Oh yes, for those who may want to know, we'll include some technical sidelines. 
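
For a first such sideline, here is a minimal sketch, in Python with made-up timestamps, of the bookkeeping behind that kind of synchronization (the same four-timestamp scheme NTP uses). Note the key assumption that the outbound and return delays are equal, which is exactly the kind of symmetry assumption E&I ask us to examine.

# Minimal sketch: estimating a remote clock's offset from a round trip,
# in the spirit of the E&I synchronization procedure (and of NTP).
# The timestamps below are invented for illustration.

def estimate_offset(t_send, t_remote_recv, t_remote_send, t_recv):
    """Classic four-timestamp estimate.

    t_send         -- local clock when the request left
    t_remote_recv  -- remote clock when the request arrived
    t_remote_send  -- remote clock when the reply left
    t_recv         -- local clock when the reply arrived
    Assumes the outbound and return delays are equal (the key assumption).
    """
    delay = (t_recv - t_send) - (t_remote_send - t_remote_recv)
    offset = ((t_remote_recv - t_send) + (t_remote_send - t_recv)) / 2.0
    return offset, delay

# Example: remote clock runs ~50 ms ahead; one-way trip ~20 ms each way.
offset, delay = estimate_offset(0.000, 0.070, 0.071, 0.041)
print(f"estimated offset: {offset*1000:.1f} ms, round-trip delay: {delay*1000:.1f} ms")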

For instance, the cloud? It's a bunch of services that are provided via distributed (some very far apart) servers running synchronized algorithms over high-speed connections. Oh wait? Everyone knows of Nvidia and its multi-trillion-dollar valuation according to the ca-pital-sino (my neologism - see Fedaerated). Right? That whole affair handles the relativity issue quite well, at least ostensibly. You know, we could be picky (and might, due to the importance of the topic) and point out issues.  

But, that's later. People need their games even if it's like gambling and pulls money from one's pocket and from one's family's needs. Same goes for GenAI, which was done without due concern for things that are important. Don't worry, we'll get there. 

Now, remember, E&I are not the only players in truth engineering. We're targeting that due to the interest in things quantum on the part of the computer people who like the ca-pital-sino and seem to want more control than is necessary. 

People's rights are being trampled by insidious algorithms. But, then, it's early yet. Will there be improvement or further decline? 

So, with that digression, let's put this off until tomorrow or after. 

Remarks: Modified: 03/21/2024

03/21/2024 -- 

Saturday, February 3, 2024

Assessments, in general

We have done this blog, in concert with 7'oops7, since 2007. We started Fedaerated in 2009. Then, in 2010, we started the TGS blog (last post, Technology assessments ...). For the Thomas Gardner Society, Inc. (TGS), as well, we had some publications (Gardner's Beacon). In 2015, we went over to Quora and have been there since. Various excursions were related to using other modes, such as WordPress. 

But, we're back to Google and its Blogger. Why? Of all of the xNN/LLM systems that we looked at (granted, the survey was not exhaustive), we liked Bard the best. So, we'll work in that environment. At the same time, we'll touch all of the other resources on a regular basis. It's just that we'll use this for publishing where we want to support mobile devices. 

Of late, we have tried LinkedIn's approach, which we look at below. There we published the first of a series. We just completed the third piece of what is expected to be five pieces. What we call AIn't is the motive. Truth engineering has been on our table for a long while but was problematic. What did it mean, for what motive is it necessary, and why just this? 

2023 brought the answer to the questions. Answers have been embedded within these posts all of this time. So, the effort was right on. Let's just say that I was being exhaustive in a way that can only be understood in the academic sense as being highly multidisciplinary. I was retired and had the time and the interest and the ability. 

There is no hubris in that comment; rather, it's a nod to a continuation of what I was doing anyway - advanced computational systems - without having the time to be thorough. So, it's time to pull all of the factors together so that we can analyze and perhaps predict and propose changes. For me, being successful at the complete survey of western civilization's influence on the world is enough of a motive. However, the world is large, so we need to bring all other cultures into the discussion. 

Notice that the TGS is based upon the history of the U.S. as an offshoot of the U.K. Be that as it may, we know that New England was not alone. Too, we know the prior history of these regions. Albeit, lessons learned or not over 400 years require attention. BTW, the U.S. started 250 years ago, almost, so that event will get regular mention. 

Okay, after seeing ChatGPT and noting the issues, I began discussions with Larry Walker, who had run the Knowledge Systems Center for Sperry Univac, for which I worked. The focus on "knowledge" was strong for both of us. That was not what we were seeing with the machine learning systems being pushed out. We can discuss that at any level. For now, these posts will be setting the stage for discussions that are pending and necessary. 

I became aware of ChatGPT in February of 2023, which was late. I wrote several posts on it and its cohorts and other topics that relate. But, in December, I published on LinkedIn the 1st of a series. It was also presented in PDF and at WordPress. The title: Artificial intelligence, not solely machine learning (AI, not solely ML). The 2nd of the series continued the theme: Knowledge and truth. 

Neither of these seemed to be of importance to the work that went into xNN/LLM. Why say that? It's been my career to work with these issues operationally (real time, industrial environments); at the same time, my private work (autodidact, by nature) covered the bases with respect to what's at play and of scope. Essentially, "theoretical" is my middle name. 

I just finished the 3rd of the series: Physicalness and mathematics. There will be one more (4th) that deals with the emergence, and surgence, of machine learning during the past two decades. As well, we have to look at data and decisions as a key topic with regard to computing and technology. That is, people, in general, need to know. Experts? We'll address their issues, too. After the 4th, we'll do a 5th which will deal with "What's next?" with all of this stuff. 

The series is based upon experience in KBE as it functioned as the basis for truth engineering. So, it's more oriented toward those operationally involved with engineering and science. I'll turn around and write this series (condensed) for the general public. 

So, here is the series: 

Notes: 

* "truth engineering" coined by David E. Jakstis in discussion of a white paper by John M. Switlik on their joint work with computational modeling for fabrication of forgings and castings. 

** This is open to public read. Some LinkedIn pages may require an account. 

--------------------------------

In terms of history, the U.S. D.O.D. supported early work in artificial intelligence. As well, data science was a common program. It is of interest to future discussion to point to the recent guidelines for digital engineering. We have made reference to "digital twin" which is an important concept that must be brought into the discussion related to advanced computing. We will make this document a major piece of our bibliography. 

Digital engineering

Remarks: Modified: 02/04/2024

02/04/2024 --  Linked this to 7'oops7 (AI, not ML solely) which will bear directly on the 777 project underlying KBE which relates to truth engineering as AI when it matures. Notice: Shattered dreams


Monday, January 1, 2024

New Year, 2024

Notice the start of decline in our activity here when we moved to Quora (2015). 

Here is a list of posts. 

We will be active in 2024.  

Remarks: Modified: 01/01/2024

01/01/2024 --  

Saturday, November 11, 2023

Forging examples

The last post reflected on the passing of David E. Jakstis who was a friend of the concept of truth engineering (focus of this blog). In that post, there was some description of David's project which dealt with using metals to make critical parts for commercial airliners. In the parlance of the systems approach, KBE (below), David was the domain expert. He knew metals, their uses, manufacturing requirement specifications and a lot more. The author of this post was the systems expert applying the KBE methods, in particular, and handling development of the modeling and algorithms behind the "intelligent" decisions. The particular project was RFD (below), which applied the KBE methodology and which can be used to explain the motivation for "truth engineering" as well as to describe its development.  

After a brief pause to acknowledge the past year, we will look a little at KBE and RFD. Then, we will show two forging examples. The first is a large part and was of the type usually handled by RFD. The second is more recent and was done with a modern development method and illustrates the end goal which is a part. The example, also, provides a look at the result of improving a process. For those interested, 3D printing came into play in this new way. We were looking at that three decades ago.  


Aside

Last year, we saw xNN/LLM systems appear in the world. An example would be ChatGPT, but there are others. With this exposure, we can start to summarize the impact of those systems and how they fit into the total scheme of AI, which would include past modes. One of these modes that continues to today is the general knowledge based systems work, sometimes referred to as expert systems. In short, as a consequence of looking at this work, we expect to cover the history of AI in depth. Many others have a similar goal, so we will be able to reference these looks at AI. 

Our continuing theme will be integrative. As we look at the motivations for approaches to software and consider details of a particular focus, we always note that tradeoffs had been made. Our goal is to see how these pertain to limits which can be identified and which, once known, need to be respected. 


What is KBE?

Knowledge Based Engineering (KBE) came out of early AI and has an engineering focus. There are many varieties of the discipline, which looks to raise the level of sophistication of support that an engineer gets from a computer. The variety addressed in this case applies constraint satisfaction to facilitate resolution of difficult choices that come with complex systems development. In this case, we used a Lisp-based system called ICAD. The page on Wikipedia for this system, ICAD (software), like all Wikipedia pages, has a "Talk" tab. 

Aside: The author has been involved in developing both of these pages. 

Since ICAD was bought and shelved so that a vendor could push their own product, material is not readily available to show details. We can discuss outputs and results. In this case, the "Talk" tab has a section titled "Real example needed" with a photo showing parts done by the forging process. Let's use this photo next. 


What is a forging? RFD?

The photo that was placed on Wikipedia was derived from photos on the site, The machines that made the Jet Age. In perusing the site's page, one can appreciate how the old technique of forging metal has kept up with advancements in technology. 

Aside: At the site, consider the size of the machinery that is involved. Growth in demand for increased pressure during the forging process is one factor.  

David's interest, and mine, in RFD (Rapid Forging Design, below) was to support this work with proper modeling, so the focus of our work was the computer and its ways. As the photo shows, one forges to get to a near-net situation. Then, machining, like one sees with the work of a sculptor, gets the part to the desired condition. In modern manufacturing, CNC machines do this work. 

With respect to the photo, the top shows the part after the forging step. The net part is shown in the lower portion. The approach reduces waste since the final step has to remove less metal. Too, the properties can be controlled by the design of the forging die (RFD, below). Testing, even destructive types, could be done by adding in tabs at critical points. 
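
To make the waste point concrete, here is a back-of-the-envelope comparison in Python; every number in it is invented for illustration and is not from RFD or any supplier.

# Hypothetical "buy-to-fly" comparison; all numbers are invented for illustration.
TI_DENSITY = 4.43e3        # kg/m^3, roughly Ti-6Al-4V
net_part_volume = 0.012    # m^3 of the finished part

# Machining from a rectangular billet might start with ~8x the net volume;
# a near-net forging might start with ~1.6x (illustrative ratios only).
billet_start = 8.0 * net_part_volume
forging_start = 1.6 * net_part_volume

def waste_kg(start_volume):
    return (start_volume - net_part_volume) * TI_DENSITY

print(f"machined from billet: {waste_kg(billet_start):.0f} kg of chips")
print(f"near-net forging:     {waste_kg(forging_start):.0f} kg of excess")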

For the most part, we had the metals expert, David. We also had an engineer who was familiar with forging science and design. His parametric approach helped define a computer system that allowed views from the design model (CAD and the database controlling the design) to be marked up with values that transformed into instructions to guide RFD's building of the die. A rough illustration of that parametric style follows.  
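
As a hypothetical illustration (in Python, since ICAD source is not something I can show), consider a few input parameters driving derived values for a die envelope; the names and numbers are mine, not RFD's.

# Hypothetical sketch of a declarative, parametric part definition,
# loosely in the spirit of an ICAD "defpart" (not actual RFD code).
import math
from dataclasses import dataclass

@dataclass
class ForgingEnvelope:
    net_length: float       # mm, from the CAD net part
    net_width: float        # mm
    net_thickness: float    # mm
    cover: float = 6.0      # mm of extra stock wrapped around the net part
    draft_deg: float = 7.0  # draft angle so the forging releases from the die

    # Derived values: the "knowledge" lives in these dependencies.
    @property
    def envelope_thickness(self) -> float:
        return self.net_thickness + self.cover      # parting line on one face

    @property
    def draft_allowance(self) -> float:
        # extra width per side because the die walls slope by draft_deg
        return self.envelope_thickness * math.tan(math.radians(self.draft_deg))

    @property
    def envelope_length(self) -> float:
        return self.net_length + 2 * (self.cover + self.draft_allowance)

    @property
    def envelope_width(self) -> float:
        return self.net_width + 2 * (self.cover + self.draft_allowance)

env = ForgingEnvelope(net_length=850.0, net_width=140.0, net_thickness=60.0)
print(env.envelope_length, env.envelope_width, env.envelope_thickness)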


KBE and RFD

This approach was not accomplished by explicit invocation of rules. Rather, we used model-based reasoning with constraint satisfaction (CSat) as the primary control mechanism. The modeling handled the transforms between the CAD part and the operational views. Then, construction occurred, which wrapped the CAD "net" part with the envelope of the forging die. A requirement? That "envelope," which represented the form of the die, had, additionally, to meet the constraints of the solid modeler.  

In this type of process, CSat was the administrator, not unlike the OS of a computer. But, as well as control, it handled relationships and resolved the explicit and implicit conflicts in a manner similar to the resolution used by rule systems. We will provide examples as we go along. As well as the model and constraints, ICAD acted as a geometric modeler. 
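
Here is a bare-bones sketch of that control idea, again hypothetical and in Python rather than Lisp: constraints are declared against the model, and a resolver keeps adjusting until every one is satisfied or a conflict has to be reported.

# Bare-bones constraint-satisfaction loop, hypothetical and much simplified:
# constraints inspect the model; fixers nudge it; the resolver iterates.
model = {"cover": 3.0, "draft_deg": 4.0, "web_thickness": 5.0}

constraints = [
    # (name, predicate, fixer)
    ("min cover",    lambda m: m["cover"] >= 5.0,     lambda m: m.update(cover=5.0)),
    ("min draft",    lambda m: m["draft_deg"] >= 5.0, lambda m: m.update(draft_deg=5.0)),
    ("web vs cover", lambda m: m["web_thickness"] >= m["cover"] + 1.0,
                     lambda m: m.update(web_thickness=m["cover"] + 1.0)),
]

for _ in range(10):                      # bounded passes, like a relaxation loop
    violated = [c for c in constraints if not c[1](model)]
    if not violated:
        break
    for name, _, fix in violated:
        print("resolving:", name)
        fix(model)
else:
    raise RuntimeError("constraints could not be reconciled")

print("resolved model:", model)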

That is where my work came in, which was keeping representational conflicts at bay. That was a mathematical/modeling problem which can be difficult to solve in a heterogeneous environment. We did local modifications of the die geometry to effect agreement (not unlike lining kids up in formation in the early grades as they learn boundaries about themselves and others). The fixup could be done in the background, as the approach was applied generally for several reasons, including handling complaints by the solid modeler. 
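
For a feel of what "local modification" means, here is a toy one-dimensional analogue in Python; it is purely illustrative and nowhere near the real surface mathematics: offset a sampled profile, then lift only the samples that would violate a clearance, so the downstream modeler stops complaining.

# Toy 1-D analogue of a local geometry fix-up (purely illustrative):
# offset a sampled profile, then locally repair points where the offset
# profile would dip below a required clearance from a neighboring surface.
def offset_profile(heights, offset):
    return [h + offset for h in heights]

def repair_clearance(profile, floor, min_gap):
    """Lift only the offending samples; leave the rest of the profile alone."""
    repaired = []
    for h, f in zip(profile, floor):
        required = f + min_gap
        repaired.append(h if h >= required else required)
    return repaired

net_profile = [10.0, 10.5, 11.0, 10.2, 9.8]   # made-up net-part samples (mm)
floor       = [ 8.0,  9.5, 12.5, 11.5, 8.5]   # made-up neighboring surface (mm)

die_profile = offset_profile(net_profile, 1.5)           # wrap with 1.5 mm of cover
die_profile = repair_clearance(die_profile, floor, 0.5)  # enforce a 0.5 mm gap
print(die_profile)   # only the middle samples get lifted; the rest stay put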

Aside: Since that time, interest in stability of these types of processes has switched attention to more homogeneous modes. But, at what cost? (Several, which I will discuss under the guise of truth engineering.) In this case, both the exterior and the interior of the forging die were modified; the interior was the boundary of the near-net condition that was expected to result from the forging operation. The project doing this representational work was titled Multiple Surface Join and Offset (MSJO), which encompasses the general problem set that remains full of open issues when one is dealing with natural objects (which are heterogeneous). Hence, truth engineering deals with the issues, known resolutions, uncertainties, tradeoff discussion, and overall management of expectations.

Aside: One of my favorite books deals with open issues in topology. It's hundreds of pages and dense. The motive for the book was to identify possible projects for PhD students. As well, I have a book that merely looks at some of the hugely believed aspects of topology. Look everywhere, and you'll see reliance on understandings of topology that do not necessarily hold up. Doubt me? See messes. I have a litany that I have compiled from watching industry types run down some perdition-laden path. Anyway, that little book provides counterexamples with regard to the decision-supporting notions of continuity, completeness, and more. AIn't developers are culpable in this oversight. So what? Well, I saw this over three decades ago, being a mathematical economist working in engineering support from the perspective of advanced computing. There has been progress that is noticeable. For any of those, let me come look at what you might have done incorrectly, which is a potential disaster waiting to happen. Of course, others are aware, too. Thankfully, the internet will allow proper discussion.   

What is the structural part of the above example? It is not identified, but, in terms of the application of KBE, many parts were designed or had their design enhanced by the method. Here is a site showing definitions: Basic aircraft structures

Forgings in the future?

We have to ask, what is the future of the forging method? A forging expert provides an appropriate view: 

  • "Forging continues to be recognized as the premiere thermomechanical process. Not only to shape metals, metal matrix and metal composite materials, but to refine and transform the metallurgical structure as well. Forging achieves both durable, reliable component shapes and the need for engineered metallurgy to meet specific product requirements."
We can look at another approach that has been offered to replace forging. But, first, let's consider the major claimant of the day, which really is problematic at its core (one might reasonably say: fakery factory). One of our goals? Explain what the problem is, why it exists, and what we ought to do. And, metal modeling is a great framework to discuss (and to demonstrate - as science in the past did with small experiments) the associated issues. 


What is AI?  

One thing ought to be clear: AI is not that which relates solely to machine learning. This can be seen by reviewing those earlier projects more closely. This post deals with a problem of major scope, which is handling AI (a huge, multifaceted affair) going forward by bringing into the discussion insights from past accomplishments which need attention due to their success in performing (resolving intractable problems). They never got attention since they were not seen and were managed in the non-academic environments that are everywhere (doing the marvels that we all expect in our comfortable present). 

There is another motive: looking at the technical aspects from another view. Applications like RFD had their own value even if the scope was local and specific to geometric modeling. Lots of effort goes into building and using systems, in general, both on and by computers. This will not stop. However, much of the work (say, Computer Science) is academic. This series will look at commercial efforts that successfully resolved complexity problems much like those we see facing, and being somewhat handled by, machine learning (xNN/LLM). But, these were never really made known. 

Again, truth engineering will be more widely discussed. Tradeoffs are broadly demanded; that does not mean cutting corners and cheating. 


An example of a forging replacement

In the example for ICAD (see Wikipedia "Talk" page), a critical part was used with photos of parts after the forging step and when finished. See this article:

Norsk Supplying FAA-Approved 3DP Ti Parts to Boeing | New Equipment Digest 

This photo is a composite of the slides (at Norsk's site). One thing to notice is that this is a much smaller part than the ones shown in the above example of major structural pieces. This smaller part still carries a structural responsibility. Basically, it ties together structural pieces that are fabricated with a "composite" construction. For the larger part, the forging die produces one part at a time. In this case, several of the smaller parts can be put together with one die; these parts would then be separated and finished, as seen in the lower part of the picture. 

A major benefit of forging was control of part properties to meet critical needs. But, each part then needs to be freed from the excess material. One constraint in RFD was to minimize this excess. In the below example, the smaller part went to a near-net state using a modern approach, 3D printing. One advancement which allowed this was the plasma-assisted fusion of metal a layer at a time, where the material was deposited in sufficient quantity to accumulate quickly. 
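
To see why the deposition rate matters, here is a back-of-the-envelope comparison; every rate and size in it is invented for illustration and is not a vendor's figure.

# Back-of-the-envelope build-time comparison; every number here is invented
# for illustration and is not a vendor specification.
preform_mass_kg = 12.0            # hypothetical near-net titanium preform

rates_kg_per_hr = {
    "wire/plasma deposition (fast, near-net)": 5.0,
    "powder-bed fusion (fine detail, slow)":   0.15,
}

for process, rate in rates_kg_per_hr.items():
    hours = preform_mass_kg / rate
    print(f"{process}: ~{hours:.1f} hours")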

Mostly we think of 3D printing, even with plasma technology, as forming with a smaller increment of material and providing the net part. In critical parts, though, years of experience have helped establish processes that go to near-net with the proviso that known machining steps will do the finishing. This part was a demonstration of obtaining part properties without forging and encourages further work. 


So what?

Does the change represented by this example bear on the future of truth engineering? Of course. This example represents the unceasing striving by humans for improvement, albeit there are many factors to bring to judgment in this regard. And, truth engineering was formulated in the time when computational systems were becoming more mature, sophisticated and effective. It framed itself within the interactive aspects that continue to today, even to the situation of the "cloud" and its nebulous state of affairs. Metals and their handling continue to be focal to progress. 

All around are many possible avenues for advancement. Yet, what the situation that founded truth engineering allowed us to see still exists, albeit with a more complicated nature. Truth engineering is one factor in a multi-pronged effort at riding the one beast, or the several, that technology has thrown our way. There are other factors and approaches. Our interest is to get the details expressed for review as well as to foster the necessary discussions and operational choices going forward. An advantage that has accrued? Being non-academic in nature allows aspects with more nuance than generalization permits to be given their due attention.

Remarks: Modified: 01/15/2024

11/13/2023 -- Restatements for clarity. 

11/24/2023 -- Spelling (typos), couple of words. 

01/15/2024 -- To follow the work, see the TruthEng blog


Thursday, September 14, 2023

David E. Jakstis

David E. Jakstis and his support were seminal to the development of "truth engineering," which is twenty-three years old and becoming more apropos to the situation of computing than before. I will be getting into details as I expand upon the subject. But, first this:

David E. Jakstis  

  1 May 1952 - 13 Sep 2023   
Casting/Forging expert    
Boeing and Spirit AeroSystems

David Jakstis (LinkedIn) 

Patents 

Obituary 

Over time, we will get into more details about the circumstances that brought David and me together. For now, we can briefly discuss a Knowledge Based Engineering (KBE) project whose accomplishments are apropos to evaluating the new world of AI. The past ten months have brought attention worldwide to the potential for computers to be smart. Though lots of other reactions have been observed, many of these are not without a basis. Troubling reports come about daily, now. 

In the U.S., the business leaders were appealing in D.C. for discipline. Like kids with their hats in hand after mischief making (see older movies). And, not unlike the bankers sitting at a rogue's table in the context of the 2008 downturn. 

Part of the problem with computing is a lack of grounding in the true philosophical sense. We will get to that. David, on the other hand, worked in an environment that had to produce metal parts with defined properties. Skipping details, again, the era of thirty years ago was seeing lots of new approaches being done via computer, with resulting issues causing people to tear their hair out. Now, the talk is of algorithms. Even worse issues, folks (take it from an old guy who has been involved for a long time). 

Say, in AI, the "I" really relates to algorithms in action (very sophisticated ones, to boot). 

The context then was computer modeling across engineering disciplines and included aspects of physics and the necessary computational mathematics. David Jakstis and Bil Kinney had developed a means to generate a model of a forging die through specifying a few parameters in a graphical/textual mode. These inputs guided an "intelligent" approach on the computer that resulted in the geometry needed for a closed volume which represented a die (tool). That tool was then built and used in operation. The result was an entity that was very near to the net part. 

Meaning, some operations on the forging after it was created resulted in the part to be used. Skipping some detail, there were many steps in this process where outputs were not mathematically optimized. That is where my project came into the scope. Its title was "Multiple Surface Join and Offset," but that long name involved lots of different aspects of creating a usable computer model. 

In this type of affair, "truth" can have many meanings that are situationally determined. People can balance this type of thing; computers need more homogeneity. Heard of data science? One huge problem for that wish deals with data not being nicely configured for the operations that are desired. That topic will be addressed later. 

Similarly for casting, there were steps to create the model for the casting form to be used to create an almost-net part. There were common themes, as geometry was being handled by NURBS, which are a standard type of modeling. But, the data differed, as did the process. Forging operations use heated ingots. Casting deals with control of flow and cooling. 
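
Since NURBS keep coming up, here is a small, self-contained sketch of the core piece, the Cox-de Boor recursion for B-spline basis functions (adding weights to these gives the rational part of NURBS); the knot vector and sample value are just an example.

# Cox-de Boor recursion for B-spline basis functions N_{i,p}(u).
# Adding weights to these bases gives the "R" (rational) in NURBS.
def basis(i, p, u, knots):
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * basis(i + 1, p - 1, u, knots)
    return left + right

# Example: cubic (p=3) bases over a clamped knot vector; the values at u=0.4
# sum to 1 (partition of unity), a property that modelers rely on.
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
u = 0.4
values = [basis(i, 3, u, knots) for i in range(len(knots) - 3 - 1)]
print(values, "sum =", sum(values))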

And, measurement was a common theme. Casting and forging provided plenty of examples with which to find problems, understand issues, and make changes to meet the overall goal. But, there are many other types of parts and materials. Needless to say, KBE methods became commonly used. 

And, the knowledge base approach, itself? This technique has been in use since the inception of the method. That will be discussed further. That is, lots of work has been done under the cover over the years. These deserve honorable mention. Too, lessons from these techniques need to be adopted. 

The larger picture is how computers relate, or ought to relate, to the world (phenomena). I will miss discussing this with David. In 2000, I gave him a white paper on the subject that I had written. After reading it, he noted that it sounded like "truth engineering," which had the proper ring. So, the concept stuck. 

The reality of the situation? I can talk about twenty years of watching the world and the involvement of computing, across the board. Every five years brought more and more examples of problems being poorly understood. In fact, this recent set of events that has AI associated with it brought to bear several issues that are untenable without serious intervention. By whom? And how? There are many other questions. 

What David and I were working on decades ago can apply. From my perspective, there are many other activities over the past two decades that need attention. A major change? The internet came to the fore while David and I were doing the early work; it has matured enough to enable greater efforts than we could have imagined. 

As I proceed, I will regularly mention David's contributions as the basis for doing the proper analysis. 

----

Note: Work extending truth engineering in terms of computational modeling. 

Remarks:   Modified: 01/15/2024

09/18/2023 -- Added photo and link to ongoing work. 

09/21/2023 -- Added link to obituary.  

09/30/2023 -- David as an honorary member of the Thomas Gardner Society, Inc

11/11/2023 -- Using forging examples as a motivation for discussing the multi-pronged nature of truth. 

01/15/2024 -- To follow the work, see the TruthEng blog

Wednesday, August 23, 2023

Alchemy lives

I first heard of Hubert L. Dreyfus while listening to a discussion about artificial intelligence and databases at a conference that was sponsored by the "very large database" group. The meeting was in the 1980s and was held in Kyoto, Japan. The reactions were varied, but one could see the positions being taken. He didn't seem to have many friends there defending him. 

Okay, leap forward. Looking further, I agreed with the guy. However, at the time, my focus was on implementation of algorithms for knowledge based systems (and, knowledge base engineering modes) that were highly effective in providing solutions that mattered. Needless to say, subsequent work involved a broader scope for computing that suggested its potential. 

Ubiquity? The concept was not unknown. However, the release of IP changed the whole tone. That was in the mid-1990s. Since then, we have had several cycles of boom and bust. The first one? Go look up the tech bust of 1999/2000 to read about one. 

There were others before and after 2000. A couple of the ones before related to artificial intelligence. This post provides a brief summary of Dreyfus's involvement in the discussions. 

Now, the theme of this post. Here is a little blurb from Bard (Google's xNN/LLM). 
  • Business is often seen as a way to make money, and in some cases, this can lead to people trying to get something for nothing. For example, some businesses may engage in deceptive or fraudulent practices in order to make a profit.
Oh, was this prompted? Sure. The idea was to tell it that the ca-pital-sino (coined in a sister blog in a post on Tuesday, 26 Jan 2010 - Shell games) deals specifically with this issue. 

Remarks: Modified: 08/23/2023

08/23/2023 --