Wednesday, July 23, 2008

Andres Agostini's 'In This I Believe!' (AAITIB), USA / Welcome to a Brainy Future, right now!

Andres Agostini is a Research Analyst, Consultant, Management Practitioner, Original Thinker, E-Author, and Institutional Coach. The topics of his in-depth study and practice are Science, Technology, Corporate Strategy, Business, Management, “Transformative Risk Management,” Professional Futurology, and the Development of Mind-Expansion Techniques. He hereby shares his thoughts, ideas, reflections, findings, and suggestions with total independence of thinking and without mental reservation.
Friday, March 14, 2008
Andres Agostini's "On This I Believe", Arlington, Virginia, USA



Andres Agostini (Ich Bin Singularitarian!)

Executive Associate for Global Markets

OMEGA SYSTEMS GROUP INC.

Arlington, Virginia, USA



Andy's Blogging and Beyond....

Future Shape of Quality (Andy’s blogging)

“I like the dreams of the future better than the history of the past.” (Jefferson). We live in a world, once called the “society of knowledge,” that is getting more and more sophisticated (in society, economics, [geo]politics, technology, the environment, and so forth) at over-exponential rates. Ray Kurzweil, in “The Singularity Is Near,” assures us that, mathematically speaking, both the base and the exponent of the power are jumping almost chaotically, as if this forthcoming “Cambrian explosion,” bathed in the applied state of the art, will change everything.

Friedrich Wilhelm Nietzsche, the German philosopher, reminds us, “It is our future that lays down the law of our work,” while Churchill tells us, “the empires of the future belong to the [prepared] mind.”

Last night I was reading the book “Wikinomics.” Its authors say that in the next 50 years applied science will evolve much more than it did in the past 400 years. To me, and because of my other research, they are being quite conservative. Vernor Vinge, the professor of mathematics, reminds us of the “Singularity,” primarily technological and secondarily social (with humans that are BIO and non-BIO, and derivatives of the two, i.e., in vivo + in silico + in quantum + in non spiritus). Prof. Vinge was invited by NASA to speak on that occasion. If one would like to check it out, Google it.

Clearly, progress in Quality Assurance has been made by Deming, Juran, Six Sigma, Kaizen (Toyota), and others. I would pay strong attention to their respective prescriptions with an OPEN MIND. Why? Because SYSTEMS are extremely dynamic these days, starting with the Universe (or “Multiverse”) itself. As I operate with risks and strategies, beyond the view of (a) the strategic planner and (b) the practitioner of management best practices à la non-ad-hoc “project management,” I have to take advantage of many other methodologies.

The compilation of approaches is fun, though it must be extremely cohesive, congruent, and efficacious.

And if the economy grows more complex, I will grab many more methodologies. I have one of my own that I call “Transformative Risk Management,” largely based on breakthroughs by the Military-Industrial (-Technological) Complex, chiefly by the people concerned with nascent NASA (Mercury, Saturn, Apollo) via Dr. Wernher von Braun, then chief engineer. Fortunately, my mentor, a doctor in science, was von Braun’s risk manager for thirteen years. He is now my supervisor.

The Military-Industrial (-Technological) Complex faced a great many challenges back in the 1950s. As a result, many breakthroughs were brought about. Today, not everyone seems to know and/or institute these findings. Some, such as ExxonMobil, do. The book “Powerful Times” attributes nearly 50% of total worldwide defense spending to the U.S. defense budget. What do they do with this kind of money? They channel it, to a great extent, into R&D labs of prime quality. Afterwards, they share “initiatives” with R&D labs at universities, global corporations, and “wiki” communities. Imagine that.

In addition, the grandfather of in-depth risk analyses is one that goes under many names besides Hazard Mode and Effect Analysis (HMEA). It has also been called Reliability Analysis for Preliminary Design (RAPD); Failure Mode and Effect Analysis (FMEA); Failure Mode, Effect, and Criticality Analysis (FMECA); and Fault Hazard Analysis (FHA). All of these, just to give an example, have to be included in your methodical toolkit alongside the prescriptions of Deming, Juran, Six Sigma, and Kaizen.
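As a minimal, purely illustrative sketch of what an FMEA-style worksheet computes (assuming the classic Risk Priority Number scoring, in which severity, occurrence, and detection are each rated 1 to 10 and multiplied together; the failure modes below are invented for the example):

```python
# Hypothetical FMEA worksheet: (description, severity, occurrence, detection),
# each factor rated 1-10 per the classic RPN convention.
failure_modes = [
    ("Seal degrades at low temperature", 9, 4, 6),
    ("Sensor reports stale reading",     7, 5, 3),
    ("Operator skips checklist step",    6, 7, 7),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: higher means address it sooner."""
    return severity * occurrence * detection

# Rank the worksheet so the riskiest failure modes surface first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)

for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

Real FMEA/FMECA practice adds criticality classes, detection methods, and corrective actions on top of this bare ranking; the sketch only shows the arithmetic backbone.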

These fellows manage with what they call “the omniscience perspective,” that is, the totality of knowledge. Believe me, they do mean it.

Yes, work hard, but know what you are doing, always think about the unthinkable, be a foresighter, and assimilate the documented “lessons learned” from previous flaws. Meanwhile, Sir Francis Bacon wrote, “He that will not apply new remedies must expect new evils; for time is the greatest innovator.”

(*) A "killer" to the "common sense" activist. A blessing to the rampantly unconventional-wisdom practitioner.

For the “crying” one, everything has changed. There has been change in (i) CHANGE itself, (ii) Time, (iii) Politics/Geopolitics, (iv) Science and technology (applied), (v) the Economy, (vi) the Environment (in the amplest meaning), (vii) the Zeitgeist (spirit of the times), (viii) the Weltanschauung (conception of the world), (ix) the Zeitgeist-Weltanschauung’s prolific interaction, etc. So there is no need to worry, since NOW, and every day forever (kind of...), there will be a different world, clearly so if one looks into the sub-atomic granularity of (zillions of) details. Unless you are a historian, there is no need to speak of PAST, PRESENT, FUTURE; JUST TALK ABOUT THE ENDLESSLY PERENNIAL PROGRESSION. Let’s learn a difficult lesson easily NOW.

“Study the science of art. Study the art of science. Picture mentally… Draw experientially. Succeed through endless experimentation… It’s recommendable to recall that common sense is much more than an immense society of hard-earned practical ideas—of multitudes of life-learned rules and tendencies, balances and checks. Common sense is not just one (1), neither is, in any way, simple.” (Andres Agostini)

Dwight D. Eisenhower, speaking of leadership, said: “The supreme quality for leadership is unquestionably integrity. Without it, no real success is possible, no matter whether it is on a section gang, a football field, in an army, or in an office.”

“…to a level of process excellence that will produce (as per GE’s product standards) fewer than four defects per million operations…” — Jack Welch (1998).

In addition to WORKING HARD and taking your “hard working” as your beloved HOBBY and never as a burden, one may wish to institute the following as well:

1.- Servitize.

2.- Productize.

3.- Webify.

4.- Outsource (strategically “cross” sourcing).

5.- Relate your core business to “molutech” (molecular technology).

Seek these primary goals (in case a reader is interested):

A.- To build trust.

B.- To empower employees.

C.- To eliminate unnecessary work.

D.- To create a new paradigm for your business enterprise, a [beyond] “boundaryless” organization.

E.- Surf dogmas; evade sectarian doctrines.

Posted by Andres Agostini at February 27, 2008 7:54 PM

On the Future of Quality !!!

"Excellence is important. To everyone, excellence means something a bit different. Do we need a metric for excellence? Why, then, do I believe that the qualitative side of it is more important than its numericalization? By the way, the increasing tsunamis of vanguard sciences, and of the corresponding technologies to be applied, bring about an upping of the technical parlance.

These times, as Peter Schwartz would firmly recommend, require “paying” the highest premium for leading knowledge.

“Chindia” (China and India) will not wait for the West. People like Ballmer (Microsoft) and Ray Kurzweil insist that current levels of complexity, if one can manage them appropriately and in a timely way, might get one a nice business success.

Yes, simple is beautiful, but horrendous when this COSMOS is overwhelmed with paradoxes, contradictions, and predicaments. And you must act to capture success and, above all, to make it sustainable and fiscally sound.

Quality is crucial. Benchmarks are important but refer to something else, though similar. Quality standards, in my view, would require a discipline to be named “Systems Quality Assurance.” No one wishes for defects or waste.

But wearing my hat and vest of strategy and risk management, the ultimate best practices of quality will not, in many settings, suffice. One has got to add (a) Systems Security, (b) Systems Safety, (c) Systems Reliability, (d) Systems Strategic Planning/Management, and a long “so forth.”

When this age of changed CHANGE is more complex than ever, and getting increasingly more so, just being truly excellent requires, without fail, many more approaches and stamina."

Posted by Andres Agostini at February 22, 2008 9:18 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 6:25 PM 0 comments

Labels: www.AndresAgostini.blogspot.com, www.andybelieves.blogspot.com

Comments: Hard Work Matters

"Clearly, hard work is extremely important. There is a grave lack of practice of this work philosophy on the battlefield. Practicing, practicing, and practicing is immeasurably relevant.

Experience accumulated throughout the years is also crucial, particularly when one is always seeking mind-expansion activities.

With it comes practical knowledge. When consulting and training, yes, you are offering ideas to PRESENT clients with CHOICES/OPTIONS leading to SOLUTIONS.

Communicating with the client is extremely difficult. Nowadays, some technical solutions that the consultant or advisor must implement have a depth that will shock the client unless there is careful and prudent preparation/orientation of the targeted audience.

Getting to know the company culture is another sine qua non. The personal cosmology of each executive or staff member involved on behalf of the client is even more important. Likewise, the professional-service expert must do the same with the CEO and Chairman.

In fact, a serious consultant must keep in his notes an unofficial psychological profile of the client’s representatives. One has to communicate unambiguously, but it sometimes helps to adapt your lexicon to that of the designated client.

From interview one, paying strong attention and listening to the customer, the advisor must give choices while always being EDUCATIONAL, INFORMATIVE, and, somehow, FORMATIVE/INDUCTIVE. That is the problem.

These times are not those. Even when the third party possesses the knowledge, skill, know-how, and technology, he/she now must work much harder at ascertaining that you lock your customer’s mind and heart in with yours.

Before starting the CONSULTING EFFORT, I personally like to have a couple of informal meetings just to listen and listen.

Then I forewarn them that I will be asking a great number of questions. Afterwards, I take extensive notes and start crafting the strategy for building rapport with this customer.

Taking all the information given informally in advance by the client, I make an oral presentation to ensure I have understood what the problem is. I also take this opportunity to capture further information and to relax everyone, while trying to win them over legitimately and transparently.

Then, if I see, for instance, that they do not know how to name or express their problem lucidly and accurately, I ask questions. But I also offer real-life examples of these probable problems with other clients.

This opportunity is absolutely vital for gauging the customer’s level of competency, and his knowledge or lack of knowledge about the issue. With all of that over, I start, informally, speaking of options, to get the customer involved in picking out the CHOICE (the solution) and to watch for the client’s initial reactions.

In my case, and many times, I must not only transfer the approaches/skills/technologies but also institute and sustain them to the 150% satisfaction of my clients.

Those of us involved with Systems Risk Management(*) (“Transformative Risk Management”) and Corporate Strategy are obliged to scan around for problems, defects, process waste, failures, etc., WITH FORESIGHT.

Once that is done and still “on guard,” I can highlight the opportunity (upside risk) to the client.

Notwithstanding, once you already know your threats, vulnerabilities, hazards, and risks (and you have a master risk plan, equally contemplated in your business plan), YOU MUST BE CREATIVE SO THAT “HARD WORK” MAKES A UNIQUE DIFFERENCE IN YOUR INDUSTRY.

While practicing, run a zillion low-cost experiments. Do a universe of trial and error. Commit to serendipity and/or pseudo-serendipity. Meanwhile, as former UK Prime Minister Tony Blair says: “EDUCATION, EDUCATION, EDUCATION.”

(*) It does not refer at all to insurance, co-insurance, or reinsurance. It is more about the multidimensional, cross-functional management of business processes so that they are compliant with goals and objectives."

Posted by Andres Agostini at February 23, 2008 4:56 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 1:58 PM 0 comments

Labels: www.AndyBelieves.blogspot.com/

Comments: Snide Advertising

Advertising and campaigning must enforce a strong strategic alliance with the client. The objective is to COMMUNICATE the firm’s products, services, values, and ethos in a transparent and accountable way: zero tolerance for distortion in the messages disseminated.

Ad agencies cannot make up for the shortcomings of the business enterprise, shortcomings that are the consequence of a sub-optimally managed core business. Get the business to its optimum first. Then communicate it clearly, being sensitive to the community at large.

A funny piece is one thing. Making fun of others is another (terrible) one. Being creative in the message is highly desirable. If the incumbent corporation has unique attributes and does great business, just say so comprehensibly, without manipulating or over-promising.

Some day soon, the subject matter of VALUES is going to be more than indispensable to keeping global society alive. The rampant violations of the aforementioned values should, without fail, be a life-and-death matter of study by ad agencies.

Global climate change, the flu pandemic (to come), geology (earthquakes, volcanoes, tsunamis), large meteorites, and nuclear wars are all among the existential risks. To make matters worse, value violations by the ad agencies, the mass media, and the rest of the economy would easily qualify as an existential risk too.

Humankind requires transparency and accountability the soonest.

Posted by Andres Agostini at February 27, 2008 8:34 PM

Comments: Where's the WOW?

Talent is absolutely a sine qua non. Nowadays, amid the over-revolution of knowledge, it is even more important; in fact, its importance is without precedent.

When I went to college to sign up for my various courses, my counselor forewarned me that my education (about to commence) would remain valid (not outdated) for only the first five years after completion. I got the message clearly and have never stopped attempting to better myself.

I mention the above because I know many people with doctoral degrees from Harvard, Oxford, or MIT who, once their studies are completed, read nothing more than ambiguous news headlines. They think that the economy is a snapshot (static) and, therefore, is not making quantum progress. Today, sci-fi has been superseded by the world-class news media alone.

Likewise, many captains of companies and countries worship mediocrity. It is unbelievable how universal this is, beginning with the most advanced nations. Friedman tells his children that they had better study so as not to give away their jobs to people from China and India and Russia.

Meanwhile, the knowledge repository is growing to ruthless proportions. The direct consequence is that the economy gets more and more automated, with more and more Artificial Intelligence. I wonder whether the people from China and India and Russia will give away their jobs to Asimo and the other robots (now in the womb).

Should we expect a “WOW” from the forthcoming robots, since the subjects of mediocrity-dom are accelerating the automation described?

In 1970, a fellow by the name of Alvin Toffler, in a book titled “Future Shock,” told us about many things to get prepared for in advance. How many have paid attention? Those who are not interested in the granularity (atomic/sub-atomic scale) of details and have paid no heed cannot complain. Get ready!

Posted by Andres Agostini at February 27, 2008 9:01 PM

Comments: Future Shape of Quality

Thank you all for your great contributions and insightfulness. Take, e.g., a Quality Assurance Program to be instituted in a company these days, in the year 2008. One will have to go through tremendous amounts of reading, writing, drawing, spread-sheeting, etc. Since the global village is the Society of Knowledge these days, to abate exponential complexity you must not only embrace it fully, you must also be thorough at all times to meet the challenge. One must also pay the price of an advanced global economy that is in increasingly perpetual innovation. Da Vinci, in a list of the 10 greatest minds, was #1. Einstein was #10. Subsequently, it is highly recommendable, if one might wish, to pay attention to “Everything should be made as simple [from the scientific stance] as possible, but not simpler” (Albert Einstein). Mr. Peters, on the other hand, has always stressed the significance of continuously disseminating new ideas. He is really making an unprecedented effort in that direction. Another premium to pay, it seems, is being extremely “thorough” (Trump).

Posted by Andres Agostini at February 28, 2008 3:11 PM

Comments: Cool Friend: C. Michael Hiam

We need, globally, to get into the “strongest” peaceful mind-set the soonest, and not by getting to peace status via waging wars. Sometimes, experts and statesmen may require “surgical interventions,” especially under the monitoring of the U.N. Diplomacy is called on to be reinvented and taken to the highest possible state of refinement: more and more diplomacy, and more and more refinement. Then, a universal and aggressively enhanced diplomacy can be instituted.

Posted by Andres Agostini at February 29, 2008 4:02 PM

Comments: Success Tips at ChangeThis

I appreciate the current contributions. I would like to think that the nearly impossible is in your way (while you are emphatically self-driven for accomplishment), to be met with determined aggressiveness toward the ends (objectives, goals) to be achieved. Churchill offers a great many examples of how an extraordinary leader works.

There are many lessons to be drawn from him, without a doubt. Churchill reminds us, as many others do, that (scientific) knowledge is power. Napoleon, incidentally, said that a high-school (lycée) graduate must study science and English (the lingua franca).

So, the “soft knowledge” (values) plus the “hard knowledge” (science, technology) must converge in the leader (the true statesman). Being updated in values and in science and technology in the 21st century, so as to be en route to being 99% success-compliant, requires as well an open mind (extremely self-critical) that is well prepared (Pasteur).

Posted by Andres Agostini at February 29, 2008 4:19 PM

Comments: Wiki Contributions

My experience tells me that every client must be worked with until he becomes your true ally. When you are selling high-tech/novel technologies/products/services, one must do a lot of talking to guide the customer into a menu of probable solutions. The more the complications, the more the nice talk, in unambiguous language.

If that phase succeeds, it is necessary to make oral/documented presentations to the targeted client. Giving him, while at it, a number of unimpeachable real-life examples (industry by industry) will get the customer to envision you as an ally rather than just a provider.

These continuous presentations are, of course, training/indoctrination for the customer, so that he better understands his problem and the breadth and scope of the likely solutions. If progress is made in this phase, one can start working out, very informally and in a relaxed manner, the clauses of the contract, particularly those that are daring. One by one.

When each one is finally approved by both parties, assemble, approve, and implement the corresponding contract. Then keep in close (in-person) contact with your customer.

Posted by Andres Agostini at February 29, 2008 4:32 PM

Comments: It's Good to Talk!

I like to meet personally and work together with my peers. Still, I can also work through the Web on my own, with the added benefits of some privacy and other conveniences. A mix of both, I think, is optimal.

How can one slow down the trends of the global economy? As more technological time elapses, the more connected and wiki we will all be. Most of the interactions I see and experience are in the virtual world, with extreme consequences in the real world.

I think it is nice and productive to exchange ideas over a cappuccino. The personal contact is nice, though it gets better when it is less frequent. So, when it happens, meeting the person becomes a splendid occasion.

As things get more automated, so will we. Neither I nor any of you invented the world. Automations will come to do more of the work than machines do now. Sometimes it is of huge help to get an emotional issue ventilated through calm, discerning e-mails.

While I keep on embracing connectedness (which I highly like), I would say one must make in-person meetings a must-do. Let's recall that we are en route to Vernor Vinge's "Singularity."

Posted by Andres Agostini at February 29, 2008 4:46 PM

Comments: A Focus on Talent

The prescription for making a true talent, as per present standards, is diverse. Within the ten most important geniuses there is Churchill again: he is the #1 (political) statesman from da Vinci’s times to the current moment. In one book (“The Last Lion”), Churchill is said to have remarked that a New Yorker, back then, transferred to him some methodology for capturing geniality.

A great deal of schooling is crucial. A great deal of self-schooling is even more vital. Being experienced in different tenures and with different industries and with different clients helps beyond belief.

Studying and researching cross-references (across the perspective of omniscience) helps even more. Seeking mentors and tutors helps. Getting trained/indoctrinated in various fields does so too. Hiring consultants for your personal, individual induction/orientation adds much.

One has got to have an open mind with a gusto for multidimensionality and cross-functionality, harnessing and remembering useful knowledge from all over, regardless of the context. I have worked on these ideas and published some “success metaphors” on the Web, in both text and video. Want it? Google it!

Learning different (even opposed) methodologies renders the combined advantages of all of the latter into your own, unique multi-approach.

Most of these ideas can be marshaled concurrently.

Posted by Andres Agostini at February 29, 2008 5:11 PM

Comments: New Cool Friend: Dan Roam

Pictures and exhibits and graphics are extremely VITAL, in my case, to reinforce and facilitate what I am trying to communicate. I believe that Arquimedes stressed the relevance of adding illustrations to his workings. Leonardo did so extensively. He’s a prime example of this.

The book REIMAGINE by Tom Peters does this splendidly. You seem to be holding a text book of the future with a plethora of pleasant colors, shapes, forms, symbols, and, above all, messages.

Leading The Revolution by Gary Hamel (Strategos Chairman and Professor at The London Business School) is similar to that of Tom’s. Tom has reminded his audiences to “think in slides.” This aids the thinking process immeasurably.

Mind Maps by Dr. Tony Buzan is extremely fun and so pervasive. I find it tedious to read a great book made up of only words, without frequent illustrations.

Posted by Andres Agostini at March 14, 2008 1:59 PM

Comments: Snide Advertising

An ad campaign must be a project abiding by standards and ethics and values. Rule #1 is to be honest: do not push/pressure your product/service at any cost (not just economically), and do not resort to negative advertising. I agree that this is a mind-set and also an ingrained, inborn talent.

While managing the different pieces (components, elements, phases, contexts that stem from the system, namely the “ad campaign”), everything must be unimpeachably true, verifiable, and deliverable. Otherwise, what would the incumbent be doing to his branding strategy as the public at large becomes increasingly disturbed by this one company?

Creativity and innovation are invited to the utmost to make the ad relevant. Soberness, at all times, is beyond crucial. Some basics are in due place. Former U.K. Prime Minister Tony Blair famously declared his chief priority: “education, education, education.” To raise quality, the advertiser must pay heed to this essential saying.

Whatever the ad message, what does the incumbent wish to accomplish? What EXPERIENCE does the “live” ad create for its audience? That answered, where does he expect to be in his industry in the forthcoming 5 years? I wonder!

Posted by Andres Agostini at March 14, 2008 2:29 PM

Comments: Cool Friend: C. Michael Hiam

Tom, as per your posting, it seems that you raised this story to offer some ideas on great leadership. Many people ask about leadership traits and how to execute them. Any further story like this allows one to place his/her mind in a greater perspective.

I believe his story is inspirational. Without the inspiring effect, there is no leadership in place. I, personally, scan for all these stories in different places. Century-21 leaders must meet dynamic challenges that will require any and every piece of savvy insight.

Posted by Andres Agostini at March 14, 2008 2:38 PM

COMMENT TO BBC WORLD (as per an E-Survey)

We are living in extreme times. As a Global Risk Manager and Scenario Strategist, I know we have the technology and science to solve many existential risks. The problem is that the world is over-populated by, as it seems, a majority of psycho-stable people. To face and act upon the immeasurable challenges ahead, we will require a majority of extremely educated (in the exact sciences), psycho-kinetic-minded people: people with an unlimited drive to do things optimally, who are visionaries, and who will go all the way to make peace universal and to best maintain the ecology. One life-and-death risk is a nuclear war. There are too many alleged statesmen willing to pull the switch to quench their mediocre egos. If we can manage the existential risks (including the ruthless progression of science and technology) systemically, systematically, and holistically, then the world (including some extra-Earth stations) can become a promising place. The powers and the superpowers must all “pull” in unison to mitigate/eliminate these extraordinarily grave risks.

Andres Agostini

www.AndyBelieves.blogspot.com/

Arlington, Virginia, USA

9:32 p.m. GMT/UTC

March 14, 2008

NAPOLEON ON EDUCATION:

(Literally. Brackets are placed by Andres Agostini. Content researched by Andres Agostini.)

“….Education, strictly speaking, has several objectives: one needs to learn how to speak and write correctly, which is generally called grammar and belles lettres [the fine literature of that time]. Each lyceum [high school] has provided for this object, and there is no well-educated man who has not learned his rhetoric.

After the need to speak and write correctly [accurately and unambiguously] comes the ability to count and measure [skillful at mathematics, physics, quantum mechanics, etc.]. The lyceums have provided this with classes in mathematics embracing arithmetical and mechanical knowledge [classic physics plus quantum mechanics] in their different branches.

The elements of several other fields come next: chronology [timing, tempo, in-flux epochs], geography [geopolitics plus geology plus atmospheric weather], and the rudiments of history are also a part of the education [sine qua non catalyzer to surf the intensively driven Knowledge Economy] of the lyceum. . . .

A young man [a starting, independent entrepreneur] who leaves the lyceum at sixteen years of age therefore knows not only the mechanics of his language and the classical authors [captain of the classic, great wars plus those into philosophy and theology], the divisions of discourse [the structure of documented oral presentations], the different figures of eloquence, the means of employing them either to calm or to arouse passions, in short, everything that one learns in a course on belles lettres.

He also would know the principal epochs of history, the basic geographical divisions, and how to compute and measure [dexterity with information technology, informatics, and telematics]. He has some general idea of the most striking natural phenomena [ambiguity, ambivalence, paradoxes, contradictions, paradigm shifts, predicaments, perpetual innovation, so forth] and the principles of equilibrium and movement [corporate strategy and risk-managing of kinetic-energy transformation pertaining to the physical world] with regard to both solids and fluids.

Whether he desires to follow the career of the barrister, that of the sword [actual, scientific war waging in the frame of reference of work competition], OR ENGLISH [CENTURY-21 LINGUA FRANCA, A MORE-THAN-VITAL TOOL TO ACCESS BASIC THROUGH COMPLEX SCIENCE], or letters; if he is destined to enter into the body of scholars [truest womb-to-tomb managers, pundits, experts, specialists, generalists], to be a geographer, engineer, or land surveyor—in all these cases he has received the general education [strong dexterity in two to three established disciplines plus a background in a multitude of diverse disciplines from the exact sciences, social sciences, etc.] necessary to become equipped [talented] to receive the remainder of instruction [duly, on-goingly indoctrinated to meet the thinkable and unthinkable challenges/responsibilities beyond his boldest imagination, indeed] that his [forever-changing, increasingly so] circumstances require, and it is at this moment [of extreme criticality for humankind’s survival], when he must make his choice of a profession, that the special studies of science [omnimode, applied with the real-time perspective of the totality of knowledge] present themselves.

If he wishes to devote himself to the military art, engineering, or artillery, he enters a special school of mathematics [quantum information sciences], the polytechnique. What he learns there is only the corollary of what he has learned in elementary mathematics, but the knowledge acquired in these studies must be developed and applied before he enters the different branches of abstract mathematics. No longer is it a question simply of education [and the mind’s due formation/shaping], as in the lyceum: NOW IT BECOMES A MATTER OF ACQUIRING A SCIENCE....”

END OF TRANSCRIPTION.
Posted by Andres Agostini on This I Believe! (AATIB) at 2:05 PM 0 comments
Labels: www.AndresAgostini.blogspot.com/, www.AndyBelieves.blogspot.com/, www.geocities.com/agosbio/a.html
Thursday, March 13, 2008
Andres Agostini's "On This I Believe", Arlington, Virginia, USA
Andres Agostini (Ich Bin Singularitarian!) -

Arlington, Virginia, USA

Posted by Andres Agostini on This I Believe! (AATIB) at 12:27 PM 0 comments
Labels: www.AndresAgostini.blogspot.com/, www.AndyBelieves.blogspot.com/, www.geocities.com/agosbio/a.html
Tuesday, March 11, 2008
Posted by Andres Agostini on This I Believe! (AATIB) at 3:31 PM 0 comments
Labels: www.AndresAgostini.blogspot.com/, www.AndyBelieves.blogspot.com/
Andres Agostini's "On This I Believe, " - Arlington, Virginia, USA

Future Shape of Quality (Andy’s blogging)

“I like the dreams of the future better than the history of the past.” (Jefferson). We live in a world, once called the “society of knowledge,” that is getting more and more sophisticated (in society, economics, [geo]politics, technology, environment, and so forth) at over-exponential rates. Ray Kurzweil, in “The Singularity Is Near,” assures us that, mathematically speaking, both the base and the exponent of the power are jumping ever more chaotically, almost as if this forthcoming “Cambrian explosion,” bathed in the applied state of the art, will change everything.

Friedrich Wilhelm Nietzsche, the German philosopher, reminds one, “It is our future that lays down the law of our work.” While Churchill tells us, “the empires of the future belong to the [prepared] mind.”

Last night I was reading the textbook “Wikinomics.” The authors say that in the next 50 years applied science will evolve much more than it did in the past 400 years. To me, and because of my other research, they are quite conservative. Vernor Vinge, the professor of mathematics, reminds us of the “Singularity,” primarily technological and secondarily social (with humans that are BIO and non-BIO and derivatives of the two, i.e., in vivo + in silico + in quantum + in non spiritus). Prof. Vinge was invited by NASA on that occasion. If one would like to check it out, Google it.

Clearly, Quality Assurance progress has been made by Deming, Juran, Six Sigma, Kaizen (Toyota), and others. I would pay strong attention to their respective prescriptions with an OPEN MIND. Why? Because SYSTEMS are extremely dynamic these days, starting with the Universe (or “Multiverse”) itself. As I operate with risks and strategies, beyond the view of (a) the strategic planner and (b) the practitioner of management best practices à la non-ad-hoc “project management,” I have to take advantage of many other methodologies.

The compilation of approaches is fun, though it must be extremely cohesive, congruent, and efficacious.

And if the economy grows more complex, I will grab many more methodologies. I have one of my own that I call “Transformative Risk Management,” based heavily on the breakthroughs of the Military-Industrial (-Technological) Complex, chiefly by the people of the nascent NASA (Mercury, Saturn, Apollo) under Dr. Wernher von Braun, then engineer-in-chief. Fortunately, my mentor, a doctor in science, was von Braun’s risk manager for thirteen years. He’s now my supervisor.

The Military-Industrial (-Technological) Complex faced a great deal of challenges back in the 1950s. As a result, many breakthroughs were brought about. Today, not everyone seems to know and/or institute these findings. Some, such as ExxonMobil, do. The textbook “Powerful Times” attributes nearly 50% of all worldwide defense spending to the U.S. defense budget. What do they do with this kind of money? They channel it, to a great extent, into R&D labs of prime quality. Afterwards, they share “initiatives” with R&D labs from universities, global corporations, and “wiki” communities. Imagine!

In addition, the grandfather of in-depth risk analyses is one that goes under many names besides Hazard Mode and Effect Analysis (HMEA). It has also been called Reliability Analysis for Preliminary Design (RAPD), Failure Mode and Effect Analysis (FMEA), Failure Mode, Effect, and Criticality Analysis (FMECA), and Fault Hazard Analysis (FHA). All of these, just to give an example, have to be included in your methodical toolkit alongside Deming’s, Juran’s, Six Sigma, and Kaizen.
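In practice, these methods come down to rating and ranking failure modes. As one minimal sketch, here is the Risk Priority Number (RPN) commonly used in FMEA; the failure modes, ratings, and function names below are invented for illustration, not drawn from any cited methodology's official worksheets.

```python
# Minimal FMEA-style sketch: rank failure modes by Risk Priority Number.
# RPN = Severity x Occurrence x Detection, each conventionally rated 1-10.
# The failure modes and ratings below are invented for illustration.

def rpn(severity, occurrence, detection):
    """Risk Priority Number for one failure mode."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return severity * occurrence * detection

failure_modes = [
    ("seal leak",       9, 3, 4),  # severe, rare, moderately detectable
    ("sensor drift",    5, 6, 7),  # moderate, frequent, hard to detect
    ("loose connector", 4, 5, 2),  # minor, common, easy to detect
]

# Highest RPN first, so mitigation effort goes to the worst risks.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

Note how a moderate but hard-to-detect failure can outrank a severe but rare one; that counter-intuitive ordering is exactly what these analyses exist to surface.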

These fellows manage with what they call “the omniscience perspective,” that is, the totality of knowledge. Believe me, they do mean it.

Yes, work hard, but know what you’re doing, always think the unthinkable, be a foresighter, and assimilate documented “lessons learned” from previous flaws. In the meantime, Sir Francis Bacon wrote, “He that will not apply new remedies must expect new evils; for time is the greatest innovator.”

(*) A "killer" to the "common sense" activist. A blessing to the rampantly unconventional-wisdom practitioner.

For the “crying” one, everything has changed: (i) CHANGE itself, (ii) time, (iii) politics/geopolitics, (iv) applied science and technology, (v) the economy, (vi) the environment (in the amplest meaning), (vii) the Zeitgeist (spirit of the times), (viii) the Weltanschauung (conception of the world), (ix) the Zeitgeist-Weltanschauung prolific interaction, etc. So there is no need to worry, since NOW, and every day forever (kind of...), there will be a different world, clearly so if one looks into the sub-atomic granularity of (zillions of) details. Unless you are a historian, there is no need to speak of PAST, PRESENT, FUTURE; just talk about the endlessly perennial progression. Let’s learn a difficult lesson easily NOW.

“Study the science of art. Study the art of science. Picture mentally… Draw experientially. Succeed through endless experimentation… It’s recommendable to recall that common sense is much more than an immense society of hard-earned practical ideas—of multitudes of life-learned rules and tendencies, balances and checks. Common sense is not just one (1), neither is, in any way, simple.” (Andres Agostini)

Dwight D. Eisenhower, speaking of leadership, said: “The supreme quality for leadership is unquestionably integrity. Without it, no real success is possible, no matter whether it is on a section gang, a football field, in an army, or in an office.”

“…to a level of process excellence that will produce (as per GE’s product standards) fewer than four defects per million operations…” — Jack Welch (1998).
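Welch’s figure is the familiar Six Sigma target of 3.4 defects per million opportunities (DPMO). A minimal sketch of the arithmetic behind it; the inspection numbers and function name are invented for illustration.

```python
# Defects per million opportunities (DPMO), the standard Six Sigma metric
# behind "fewer than four defects per million operations."
# The inspection numbers below are invented for illustration.

def dpmo(defects, units, opportunities_per_unit):
    """Scale an observed defect count to a per-million-opportunities figure."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Suppose 50,000 units were inspected, each with 10 defect opportunities,
# and 17 defects were found:
score = dpmo(defects=17, units=50_000, opportunities_per_unit=10)
print(f"{score:.1f} DPMO")  # 34.0 DPMO, an order of magnitude short of
                            # the Six Sigma target of 3.4 DPMO
```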

In addition to WORKING HARD and taking your “hard working” as your beloved HOBBY and never as a burden, one may wish to institute, as well, the following:

1.- Servitize.

2.- Productize.

3.- Webify.

4.- Outsource (strategically “cross” sourcing).

5.- Relate your core business to “molutech” (molecular technology).

Seek these primary goals (in case a reader is interested):

A.- To build trust.

B.- To empower employees.

C.- To eliminate unnecessary work.

D.- To create a new paradigm for your business enterprise, a [beyond] “boundaryless” organization.

E.- Surf dogmas; evade sectarian doctrines.

Posted by Andres Agostini at February 27, 2008 7:54 PM

On the Future of Quality !!!

"Excellence is important. To everyone, excellence means something a bit different. Do we need a metric for excellence? I believe the qualitative side of it is more important than its numericalization. By the way, the increasing tsunamis of vanguard sciences and corresponding technologies to be applied bring about an upping of the technical parlance.

These times, as Peter Schwartz would firmly recommend, require one to “pay” the highest premium for leading knowledge.

“Chindia” (China and India) will not wait for the West. People like Ballmer (Microsoft) and Ray Kurzweil insist that current levels of complexity, when one manages them appropriately and in a timely fashion, might get one a nice business success.

Yes, simple is beautiful, but horrendous when this COSMOS is overwhelmed with paradoxes, contradictions, and predicaments. And you must act to capture success and, above all, to make it sustainable.

Quality is crucial. Benchmarks are important but refer to something else, though similar. Quality standards, in my view, would require a discipline to be named “Systems Quality Assurance.” No one wishes for defects/waste.

But wearing my hat and vest of strategy and risk management, the ultimate best practices of quality will not suffice in many settings. One must add (a) Systems Security, (b) Systems Safety, (c) Systems Reliability, (d) Systems Strategic Planning/Management, and a long “so forth.”

When this age of changed CHANGE is as complex as never before, and getting increasingly more so, just being truly excellent requires, without fail, many more approaches and much more stamina."

Posted by Andres Agostini at February 22, 2008 9:18 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 6:25 PM 0 comments

Labels: www.AndresAgostini.blogspot.com, www.andybelieves.blogspot.com


Comments: Hard Work Matters

"Clearly, hard work is extremely important. There is a grave lack of practice of this work philosophy on the battlefield. Practicing, practicing, and practicing is immeasurably relevant.

Experience accumulated throughout the years is also crucial, particularly when one is always seeking mind-expansion activities.

With it comes practical knowledge. When consulting and training, yes, you’re offering ideas to PRESENT clients with CHOICES/OPTIONS leading to SOLUTIONS.

Communicating with the client is extremely difficult. Nowadays, some technical solutions that the consultant or advisor must implement have a depth that will shock the client unless there is a careful and prudent preparation/orientation of the targeted audience.

Getting to know the company culture is another sine qua non. The personal cosmology of each executive or staff member involved on behalf of the client is even more important. The professional service expert must do likewise with the CEO and the Chairman.

In fact, a serious consultant must keep in his notes an unofficial psychological profile of the client representatives. One has to communicate unambiguously, but it sometimes helps to adapt your lexicon to that of the designated client.

From interview one, paying strong attention and listening to the customer, the advisor must give choices while always being EDUCATIONAL, INFORMATIVE, and, somehow, FORMATIVE/INDUCTIVE. That’s the problem.

These times are not those of old. Even when the third party possesses the knowledge, skill, know-how, and technology, he/she now must work much harder to make sure the customer’s mind and heart lock in with his/hers.

Before starting the CONSULTING EFFORT, I personally like to have a couple of informal meetings just to listen and listen.

Then, I forewarn them that I will be asking a great number of questions. Afterwards, I take extensive notes and start crafting the strategy to build up rapport with the customer.

Taking all the information given informally in advance by the client, I make an oral presentation to confirm I have understood what the problem is. I also take this opportunity to capture further information and to relax everyone, while trying to win them over legitimately and transparently.

Then, if I see, for instance, that they do not know how to name/express their problem lucidly and accurately, I ask questions. But I also offer real-life examples of these probable problems from other clients.

This opportunity is absolutely vital for gauging the customer’s level of competency and knowledge, or lack thereof, about the issue. With all of that covered, I start, informally, speaking of options to get the customer involved in picking out the CHOICE (the solution), watching for the client’s initial reactions.

In my case, and many times, I must not only transfer the approaches/skills/technologies but also institute and sustain them to the 150% satisfaction of my clients.

Those of us involved with Systems Risk Management(*) (“Transformative Risk Management”) and Corporate Strategy are obliged to scan around for problems, defects, process waste, failures, etc. WITH FORESIGHT.

Once that is done and still “on guard,” I can highlight the opportunity (upside risk) to the client.

Notwithstanding, once you already know your threats, vulnerabilities, hazards, and risks (and you have a master risk plan, equally contemplated in your business plan), YOU MUST BE CREATIVE SO THAT “HARD WORK” MAKES A UNIQUE DIFFERENCE IN YOUR INDUSTRY.

While practicing, run a zillion low-cost experiments. Do a universe of trial and error. Commit to serendipity and/or pseudo-serendipity. In the meantime, and as former UK Prime Minister Tony Blair says: “EDUCATION, EDUCATION, EDUCATION.”

(*) It does not refer at all to insurance, co-insurance, or reinsurance. It is more about the multidimensional, cross-functional management of business processes so that they are goals- and objectives-compliant."
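The "zillion low-cost experiments" advice above maps onto a very simple algorithm: random search, i.e., try many cheap variations and keep the best one seen so far. A toy sketch, in which the cost function and search range stand in for any real experiment and are invented purely for illustration:

```python
# Trial-and-error as random search: run many cheap experiments, keep the best.
# The cost function below is invented for illustration; its optimum is at x = 3.
import random

def cost(x):
    return (x - 3.0) ** 2 + 1.0

random.seed(42)                  # make the "experiments" reproducible
best_x, best_cost = None, float("inf")
for _ in range(10_000):          # many cheap trials
    x = random.uniform(-10, 10)  # one low-cost experiment
    c = cost(x)
    if c < best_cost:            # keep only improvements
        best_x, best_cost = x, c

print(f"best x = {best_x:.2f}, cost = {best_cost:.4f}")
```

With enough cheap trials the best result lands very near the true optimum, which is the whole argument for volume of experiments over perfection of any single one.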

Posted by Andres Agostini at February 23, 2008 4:56 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 1:58 PM 0 comments

Labels: www.AndyBelieves.blogspot.com/


Comments: Snide Advertising

Advertising and campaigning must enforce a strong strategic alliance with the client. The objective is to COMMUNICATE the firm’s products, services, values, ethos in a transparent and accountable way. Zero distortion tolerance as to the messages disseminated.

Ad agencies cannot make up for the shortcomings of the business enterprise; those shortcomings are the consequence of a sub-optimally managed core business. Get the business optimal first. Then communicate it clearly, being sensitive to the community at large.

A funny piece is one thing. Making fun of others is another (a terrible one). Being creative in the message is highly desirable. If the incumbent’s corporation has unique attributes and does great business, just say it comprehensibly, without manipulating or over-promising.

Some day soon the subject matter of VALUES is going to be more than indispensable to keeping global society alive. The rampant violations of the aforementioned values should, without fail, be a life-and-death matter of study by ad agencies.

Global climate change, the flu pandemic (to come), geology (earthquakes, volcanoes, tsunamis), large meteorites, and nuclear wars are all among the existential risks. To make matters worse, value violations by the ad agencies, the mass media, and the rest of the economy would easily qualify as an existential risk as well.

Humankind requires transparency and accountability the soonest.

Posted by Andres Agostini at February 27, 2008 8:34 PM

Comments: Where's the WOW?

Talent is absolutely a sine qua non. Nowadays, amid the over-revolution of knowledge, it is even more important; in fact, without precedent.

When I went to college to sign up for diverse courses, my counselor forewarned me that my education (about to commence) would remain valid (not outdated) for only the first five years after completion. I got the message clearly and have never stopped attempting to better myself.

I mention the above because I know many people with doctoral degrees from Harvard, Oxford, and MIT who, once their studies are completed, don’t read anything more than ambiguous news headlines. They think that the economy is a snapshot (static) and, therefore, not making quantum progress. Today sci-fi has been superseded by the world-class news media alone.

Likewise, many captains of companies and countries worship mediocrity. It’s unbelievable how universal this is, beginning with the most advanced nations. Friedman tells his children that they had better study so as not to give away their jobs to people in China and India and Russia.

In the meantime, the knowledge repository is growing to ruthless proportions. The direct consequence is for the economy to get more and more automated, with more and more Artificial Intelligence. I wonder if the people of China and India and Russia will give away their jobs to ASIMO and other robots (now in the womb).

Should we expect “WOW” from the forthcoming robots, since the subjects of mediocrity-dom are accelerating the automation described?

In 1970 a fellow by the name of Alvin Toffler, in a book titled “Future Shock,” told us many things to get prepared for in advance. How many have paid attention? Those who are not interested in the granularity (atomic/sub-atomic scale) of details and have paid no heed cannot complain. Get ready!

Posted by Andres Agostini at February 27, 2008 9:01 PM

Comments: Future Shape of Quality

Thank you all for your great contributions and insightfulness. Take a Quality Assurance Program, e.g., to be instituted in a company these days, in 2008. One will have to go through tremendous amounts of reading, writing, drawing, spread-sheeting, etc. Since the global village is the Society of Knowledge these days, to abate exponential complexity you must not only embrace it fully; you must also be thorough at all times to meet the challenge. One must also pay the price of an advanced global economy that is in increasingly perpetual innovation. Da Vinci, in a list of the 10 greatest minds, was #1. Einstein was #10. Subsequently, it is highly recommendable, if one might wish, to pay attention to “Everything should be made as simple [from the scientific stance] as possible, but not simpler.” (Albert Einstein). Mr. Peters, on the other hand, has always stressed the significance of continuously disseminating new ideas. He is really making an unprecedented effort in that direction. Another premium to pay, it seems, is being extremely “thorough” (Trump).

Posted by Andres Agostini at February 28, 2008 3:11 PM

Comments: Cool Friend: C. Michael Hiam

We need, globally, to get into the “strongest” peaceful mind-set as soon as possible, not reaching a state of peace by waging wars. Sometimes experts and statesmen may require “surgical interventions,” especially under the monitoring of the U.N. Diplomacy is called to be reinvented and taken to the highest possible state of refinement: more and more diplomacy, and more and more refinement. Then a universal, aggressively enhanced diplomacy can be instituted.

Posted by Andres Agostini at February 29, 2008 4:02 PM

Comments: Success Tips at ChangeThis

I appreciate the current contributions. I’d like to think that the nearly impossible is within your reach while you’re emphatically self-driven for accomplishment, with determined aggressiveness toward the ends (objectives, goals) to be met. Churchill offers a great many examples of how an extraordinary leader works.

Many lessons can be drawn from him, without a doubt. Churchill reminds us, as many others do, that (scientific) knowledge is power. Napoleon, incidentally, said that a high-school (lyceum) graduate must study science and English (the lingua franca).

So the “soft knowledge” (values) plus the “hard knowledge” (science, technology) must converge in the leader (the true statesman). Being up to date in values, science, and technology in the 21st century, en route to being 99% success-compliant, also requires an open, extremely self-critical mind that is well prepared (Pasteur).

Posted by Andres Agostini at February 29, 2008 4:19 PM

Comments: Wiki Contributions

My experience tells me that every client must be turned into a true ally. When you’re selling high-tech, novel technologies, products, or services, one must do a lot of talking to guide the customer toward a menu of probable solutions. The more the complications, the more the nice talk in unambiguous language.

If that phase succeeds, it’s necessary to make oral and documented presentations to the targeted client. Giving him, while at it, a number of unimpeachable real-life examples (industry by industry) will lead the customer to see you as an ally rather than just a provider.

These continuous presentations are, of course, training and indoctrination for the customer, so that he better understands his problem and the breadth and scope of the likely solutions. If progress is made in this phase, one can start working out, very informally and flexibly, the clauses of the contract, particularly the daring ones, one by one.

When each clause is finally approved by both parties, assemble the corresponding contract, get it approved, and implement it. Then keep close, in-person contact with your customer.

Posted by Andres Agostini at February 29, 2008 4:32 PM

Comments: It's Good to Talk!

I like to meet in person and work together with my peers. I can also work through the Web on my own, with the added benefits of some privacy and other conveniences. A mix of both, I think, is optimal.

How can one slow down global economic trends? The further technological time takes us, the more connected and wiki-like we will all be. Most of the interactions I see and experience in the virtual world have extreme consequences in the real world.

I think it’s nice and productive to exchange ideas over a cappuccino. Personal contact is nice, though it gets better when it is less frequent; then, when it happens, meeting the person becomes a splendid occasion.

As things get more automated, so will we. Neither I nor any of you invented the world. Automatons will come to do more of the work than machines do now. Sometimes it is of huge help to get an emotional issue ventilated through calm, discerning e-mails.

While continuing to embrace connectedness (which I highly value), I would say one must make in-person meetings a must-do. Let's recall that we are en route to Vernor Vinge's "Singularity."

Posted by Andres Agostini at February 29, 2008 4:46 PM

Comments: A Focus on Talent

The prescription for making a true talent by present standards is diverse. Among the ten most important geniuses there is Churchill again: the #1 (political) statesman from da Vinci’s times to the current moment. In one book (The Last Lion), Churchill is said to have remarked that a New Yorker, back then, transferred to him some methodology for capturing geniality.

A great deal of schooling is crucial. A great deal of self-schooling is even more vital. Being experienced in different tenures and with different industries and with different clients helps beyond belief.

Studying and researching cross-references (across the perspective of omniscience) helps even more. Seeking mentors and tutors helps. Getting trained and indoctrinated in various fields does so too. Hiring consultants for your personal, individual induction and orientation adds much.

It also helps to have an open mind with a gusto for multidimensionality and cross-functionality, harnessing and remembering useful knowledge from everywhere, regardless of the context. I have worked on these ideas and published some “success metaphors” on the Web, in both text and video. Want them? Google them!

Learning different (even opposed) methodologies renders the combined advantages of all of them into your own, unique multi-approach.

Most of these ideas can be marshaled concurrently.

Posted by Andres Agostini at February 29, 2008 5:11 PM

NAPOLEON ON EDUCATION:

(Transcribed literally; brackets placed by Andres Agostini. Content researched by Andres Agostini.)

“….Education, strictly speaking, has several objectives: one needs to learn how to speak and write correctly, which is generally called grammar and belles lettres [fine literature of that time]. Each lyceum [high school] has provided for this object, and there is no well-educated man who has not learned his rhetoric.

After the need to speak and write correctly [accurately and unambiguously] comes the ability to count and measure [skillful at mathematics, physics, quantum mechanics, etc.]. The lyceums have provided this with classes in mathematics embracing arithmetical and mechanical knowledge [classic physics plus quantum mechanics] in their different branches.

The elements of several other fields come next: chronology [timing, tempo, in-flux epochs], geography [geopolitics plus geology plus atmospheric weather], and the rudiments of history are also a part of the education [sine qua non catalyzer to surf the Intensively-driven Knowledge Economy] of the lyceum. . . .

A young man [a starting, independent entrepreneur] who leaves the lyceum at sixteen years of age therefore knows not only the mechanics of his language and the classical authors [captain of the classic, great wars plus those into philosophy and theology], the divisions of discourse [the structure of documented oral presentations], the different figures of eloquence, the means of employing them either to calm or to arouse passions, in short, everything that one learns in a course on belles lettres.

He also would know the principal epochs of history, the basic geographical divisions, and how to compute and measure [dexterity with information technology, informatics, and telematics]. He has some general idea of the most striking natural phenomena [ambiguity, ambivalence, paradoxes, contradictions, paradigm shifts, predicaments, perpetual innovation, so forth] and the principles of equilibrium and movement both [corporate strategy and risk-managing of kinetic energy transformation pertaining to the physical world] with regard to solids and fluids.

Whether he desires to follow the career of the barrister, that of the sword [actual, scientific war waging in the frame of reference of work competition], OR ENGLISH [CENTURY-21 LINGUA FRANCA, MORE-THAN-VITAL TOOL TO ACCESS BASIC THROUGH COMPLEX SCIENCE], or letters; if he is destined to enter into the body of scholars [truest womb-to-tomb managers, pundits, experts, specialists, generalists], to be a geographer, engineer, or land surveyor—in all these cases he has received a general education [strongly dexterous of two to three established disciplines plus a background of a multitude of diverse disciplines from the exact sciences, social sciences, etc.] necessary to become equipped [talented] to receive the remainder of instruction [duly, on-going-ly indoctrinated to meet the thinkable and unthinkable challenges/responsibilities beyond his boldest imagination, indeed] that his [forever-changing, increasingly so] circumstances require, and it is at this moment [of extreme criticality for humankind survival], when he must make his choice of a profession, that the special studies [omnimode, applied with the real-time perspective of the totality of knowledge] science present themselves.

If he wishes to devote himself to the military art, engineering, or artillery, he enters a special school of mathematics [quantum information sciences], the polytechnique. What he learns there is only the corollary of what he has learned in elementary mathematics, but the knowledge acquired in these studies must be developed and applied before he enters the different branches of abstract mathematics. No longer is it a question simply of education [and mind’s duly formation/shaping], as in the lyceum: NOW IT BECOMES A MATTER OF ACQUIRING A SCIENCE....”

END OF TRANSCRIPTION.

Posted by Andres Agostini on This I Believe! (AATIB) at 10:30 PM 0

Posted by Andres Agostini on This I Believe! (AATIB) at 11:52 AM 0 comments
Labels: www.AndresAgostini.blogspot.com/, www.AndyBelieves.blogspot.com/, www.geocities.com/agosbio/a.html
Viewing how to succeed:

video
Posted by Andres Agostini on This I Believe! (AATIB) at 10:23 AM 0 comments
Labels: www.AndresAgostini.blogspot.com/, www.AndyBelieves.blogspot.com/, www.geocities.com/agosbio/a.html
Andres Agostini's (Arlington, Virginia, USA)
Posted by Andres Agostini on This I Believe! (AATIB) at 10:22 AM 0 comments
E-mail Andy...
AgosDres@yahoo.com
Objective!

To disseminate new ideas, hypotheses, theses, original thinking, and new proposals to reinvent theory pertaining to Strategy, Innovation, Performance, and Risk (of all kinds), via scientific and highly sophisticated management, in accordance with the perspective of applied omniscience (the perspective of the totality of knowledge). Put simply, to research and analyze new ways to optimize best practices to an optimum degree.
Where is Andy?

* http://www.geocities.com/seekingandresagostini/1.html

Andy on The Science Statement…


The American Heritage® Dictionary of the English Language, Fourth Edition, about “science” refers: “…THE OBSERVATION, IDENTIFICATION, DESCRIPTION, EXPERIMENTAL INVESTIGATION, and theoretical explanation of phenomena…Such activities restricted to a class of natural phenomena…SUCH ACTIVITIES APPLIED TO AN OBJECT OF INQUIRY OR STUDY… METHODOLOGICAL ACTIVITY, DISCIPLINE, OR STUDY…AN ACTIVITY THAT APPEARS TO REQUIRE STUDY AND METHOD…KNOWLEDGE, ESPECIALLY THAT GAINED THROUGH EXPERIENCE….”

Although I do not hold a diploma entitling me to claim to be a scientist, I must state that the upper-cased phrases in the above definition do apply to me.

I have been surrounded all my life by some of the most challenging entrepreneurs in the world. I have been lucky. Many of them are from the U.S., U.K., Japan, Canada, Spain, Brazil, the European Union, etc.

Since 1996 I have had mentors, tutors, supervisors, and colleagues from the hardest core of the scientific arena. I have been blessed. I have a thirst for scientific knowledge beyond the boldest dreams, and I will marshal, on the double, every effort to capture more and more of the avant-garde state of the art, at any cost and forever.

Fine arts are one way to scan for knowledge. Science (and everyone is a scientist, documented or undocumented) is another way to capture knowledge, skills, competencies, insights, etc.

I respect all occupations and professions, especially those of consummated scientists. Who knows? Someday I may tender a little gift to humankind, born of my utmost stubborn, recursive, forever search.

In the meantime, my in-depth research, analyses, consultancy, e-publishing, and blogging will carry on with a Da Vincian mind and an Einsteinian, à la “gedanken,” brain, if I may. Yes, I will, and without fail.

More information on Andy at his BIO.
Video by Andy (April 03, 2008)
Andres Agostini's Multiverse Office, Arlington, Virginia, USA

* http://www.agostinimultiverse.blogspot.com/

From Einstein....
Albert Einstein, “the whole of science is nothing more than a [perpetual] refinement of everyday thinking.”

Definition of "Transformative Risk Management"

* http://transriskmanagement.blogspot.com/

Who is Andy Agostini? ...


WHO IS ANDY AGOSTINI?

“Put simply, an inspired, determined soul, with an audacious style of ingrained womb-to-tomb thinking from the monarchy of originality, who starvingly seeks and seeks and seeks, in real time, the yet unimagined futures in diverse ways, contexts, and approaches, originated in the FUTURE. A knowledge-based, pervasively rebellious ‘type A Prima Donna,’ born out of extraterrestrial protoplasm, who is on a rampant mission to (cross-)research science (the state of the art from the avant-garde) progressively, to envision, and to capture a breakthrough foresight of what is, what might be, and what should be still to come, while he marshals his ever-practicing, inquisitive, future-driven scenarios via his Lines of Practice and from the intertwined, intersected, chaotically frenzied stances that combine both subtlety and brute force with the until-now overwhelmingly unthinkable.”

Andres Agostini

www.AndyBelieves.blogspot.com

11:10 p.m. (GMT / UTC)

Monday, March 24, 2008

A Video by Andy...

(more)
MySpace...

* http://www.myspace.com/aagostini


Contact Andy...
AndresAgostini@gmail.com

AGOSTINI HOME WEBSITE ....

* http://agosblogs.blogspot.com/

It’s about ...
www.AndyBelieves.blogspot.com
A Singularitarian into Original Thinking...
Andres Agostini - Arlignton, Virginia, USA
Andy's Comment to an E-Survey by BBC World:

We are living in extreme times. As a global risk manager and scenario strategist, I know we have the technology and science to solve many existential risks. The problem is that the world is over-populated by, it seems, a majority of merely psycho-stable people. To face and act upon the immeasurable challenges ahead, we will require a majority of extremely educated (in the exact sciences), psycho-kinetically minded people: people with an unlimited drive to do things optimally, who are visionaries, who will go all the way to make peace universal and to best maintain the ecology. One life-or-death risk is nuclear war; there are too many alleged statesmen willing to pull the switch to quench their mediocre egos. If we can manage the existential risks (including the ruthless progression of science and technology) systemically, systematically, and holistically, the world (including some extra-Earth stations) will be a promising place. The powers and the superpowers must all “pull” in unison to mitigate or eliminate these extraordinarily grave risks.

Andres Agostini

www.AndyBelieves.blogspot.com/

Arlington, Virginia, USA

9:32 p.m. GMT/UTC

March 14, 2008
Einstein against Jurassic Common Sense...
Andres Agostini (Ich Bin Singularitarian!)
Blogging at Tom Peters' ...

* http://search.yahoo.com/search;_ylt=A0geu7AWztpHRAEAWipXNyoA?p=%E2%80%9CDispatches+from+the+New+World+of+Work%E2%80%9D+%E2%80%9Candres+agostini%E2%80%9D&y=Search&fr=yfp-t-501&ei=UTF-8


Present Versus Future by Andres Agostini
Andy's Biz Card...
Andres Agostini, Arlington, Virginia, USA


On "Artificial Intelligence" - As follows:

Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.

Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it.

Major AI textbooks define artificial intelligence as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] AI can be seen as a realization of an abstract intelligent agent (AIA) which exhibits the functional essence of intelligence.[3] John McCarthy, who coined the term in 1956,[4] defines it as "the science and engineering of making intelligent machines."[5]
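The “intelligent agent” definition above (perceive the environment, then act to maximize the chance of success) can be sketched as a toy program. The two-square vacuum world below is a standard textbook illustration and my own assumption, not something from the original excerpt:

```python
class VacuumAgent:
    """Minimal reflex agent for a two-square vacuum world ("A" and "B")."""

    def act(self, location, dirty):
        # Percept: where the agent is, and whether that square is dirty.
        # The chosen action maximizes the chance of a fully clean world.
        if dirty:
            return "Suck"
        # Nothing to clean here; move to inspect the other square.
        return "Right" if location == "A" else "Left"

agent = VacuumAgent()
print(agent.act("A", True))   # Suck
print(agent.act("A", False))  # Right
print(agent.act("B", False))  # Left
```

Even this trivial agent fits the definition: it maps percepts to actions in a way that steers its environment toward the goal state.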

Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.[7]

AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic.[8] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[9] Other names for the field have been proposed, such as computational intelligence,[10] synthetic intelligence,[10] intelligent systems,[11] or computational rationality.[12]



Perspectives on AI

AI in myth, fiction and speculation

Main articles: artificial intelligence in fiction, ethics of artificial intelligence, transhumanism, and Technological singularity

Humanity has imagined in great detail the implications of thinking machines or artificial beings. They appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea.[13] The earliest known humanoid robots (or automatons) were sacred statues worshipped in Egypt and Greece, believed to have been endowed with genuine consciousness by craftsman.[14] In medieval times, alchemists such as Paracelsus claimed to have created artificial beings.[15] Realistic clockwork imitations of human beings have been built by people such as Yan Shi,[16] Hero of Alexandria,[17] Al-Jazari[18] and Wolfgang von Kempelen.[19] Pamela McCorduck observes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized."[20]

In modern fiction, beginning with Mary Shelley's classic Frankenstein, writers have explored the ethical issues presented by thinking machines.[21] If a machine can be created that has intelligence, can it also feel? If it can feel, does it have the same rights as a human being? This is a key issue in Frankenstein as well as in modern science fiction: for example, the film Artificial Intelligence: A.I. considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue is also being considered by futurists, such as California's Institute for the Future under the name "robot rights",[22] although many critics believe that the discussion is premature.[23][24]

Science fiction writers and futurists have also speculated on the technology's potential impact on humanity. In fiction, AI has appeared as a servant (R2D2), a comrade (Lt. Commander Data), an extension to human abilities (Ghost in the Shell), a conqueror (The Matrix), a dictator (With Folded Hands) and an exterminator (Terminator, Battlestar Galactica). Some realistic potential consequences of AI are decreased labor demand,[25] the enhancement of human ability or experience,[26] and a need for redefinition of human identity and basic values.[27]

Futurists estimate the capabilities of machines using Moore's Law, which measures the relentless exponential improvement in digital technology with uncanny accuracy. Ray Kurzweil has calculated that desktop computers will have the same processing power as human brains by the year 2029, and that by 2040 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity".[28]
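The Moore's-law extrapolation behind such estimates is plain compound-growth arithmetic. The sketch below assumes an illustrative doubling period of two years; the 2029 date is Kurzweil's claim as reported above, not a figure of mine:

```python
def growth_factor(years, doubling_period=2.0):
    """Compound improvement when capability doubles every fixed period."""
    return 2 ** (years / doubling_period)

# From 2008 (when this page was written) to Kurzweil's 2029 estimate
# is 21 years, i.e. 10.5 doublings:
factor = growth_factor(2029 - 2008)
print(round(factor))  # 1448
```

Roughly a 1,448-fold improvement under those assumptions, which is why a fixed doubling period produces such startling forecasts.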

"Artificial intelligence is the next stage in evolution," Edward Fredkin said in the 1980s,[29] expressing an idea first proposed by Samuel Butler's Darwin Among the Machines (1863), and expanded upon by George Dyson (science historian) in his book of the same name (1998). Several futurists and science fiction writers have predicted that human beings and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger, is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick and Ray Kurzweil.[28] Transhumanism has been illustrated in fiction as well, for example on the manga Ghost in the Shell.

History of AI research

Main articles: history of artificial intelligence and timeline of artificial intelligence

In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, by the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.[30]

The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[31] Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing:[32] computers were solving word problems in algebra, proving logical theorems and speaking English.[33] By the mid-60s their research was heavily funded by the U.S. Department of Defense[34] and they were optimistic about the future of the new field:

* 1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do"[35]
* 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[36]

These predictions, and many like them, would not come true. The researchers had failed to recognize the difficulty of some of the problems they faced.[37] In 1974, in response to criticism from England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter.[38]

In the early 80s, AI research was revived by the commercial success of expert systems, programs that apply the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached more than a billion dollars.[39] Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.[40] Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.[41]

In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.[42] The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[43]

Philosophy of AI

Main article: philosophy of artificial intelligence

Can the brain be simulated? Does this prove machines can think?

The philosophy of artificial intelligence considers the question "Can machines think?" Alan Turing, in his classic 1950 paper, Computing Machinery and Intelligence, was the first to try to answer it. In the years since, several answers have been given:[44]

* Turing's "polite convention": If a machine acts as intelligently as a human being, then it is as intelligent as a human being. This "convention" forms the basis of the Turing test.[45]
* The artificial brain argument: The brain can be simulated. This argument combines the idea that a Turing complete machine can simulate any process, with the materialist idea that the mind is the result of a physical process in the brain.[46]
* The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. This assertion was printed in the program for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.[47]
* Newell and Simon's physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action. This statement claims that the essence of intelligence is symbol manipulation.[48] Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge.[49]
* Gödel's incompleteness theorem: A physical symbol system cannot prove all true statements. Roger Penrose is among those who claim that Gödel's theorem limits what machines can do.[50]
* Searle's "strong AI position": A physical symbol system can have a mind and mental states. Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.[51]

AI research

Problems of AI

While there is no universally accepted definition of intelligence,[52] AI researchers have studied several traits that are considered essential.[6]

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions.[53] By the late 80s and 90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[54]

For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.[55]

It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model.[56] Embodied cognitive science argues that unconscious sensorimotor skills are essential to our problem solving abilities. It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.

Knowledge representation

Main articles: knowledge representation and commonsense knowledge

Knowledge representation[57] and knowledge engineering[58] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[59] situations, events, states and time;[60] causes and effects;[61] knowledge about knowledge (what we know about what other people know);[62] and many other, less well researched domains. A complete representation of "what exists" is an ontology[63] (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

* Default reasoning and the qualification problem: Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about birds in general. John McCarthy identified this problem in 1969[64] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[65]
* Unconscious knowledge: Much of what people know isn't represented as "facts" or "statements" that they could actually say out loud. They take the form of intuitions or tendencies and are represented in the brain unconsciously and sub-symbolically. This unconscious knowledge informs, supports and provides a context for our conscious knowledge. As with the related problem of unconscious reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.
* The breadth of common sense knowledge: The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge, such as Cyc, require enormous amounts of tedious step-by-step ontological engineering — they must be built, by hand, one complicated concept at a time.[66]
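The qualification problem above can be made concrete with a toy sketch. The exception set here is a hypothetical illustration, not a real knowledge base; the point is that the "birds fly" default is never simply true or false:

```python
def can_fly(bird, exceptions={"penguin", "ostrich", "kiwi"}):
    """Default reasoning sketch: assume a bird flies unless it is a
    known exception. Each newly discovered exception (an injured robin,
    a caged sparrow) would have to be added by hand, illustrating the
    qualification problem."""
    return bird not in exceptions

print(can_fly("robin"), can_fly("penguin"))  # True False
```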

Planning

Main article: automated planning and scheduling

Intelligent agents must be able to set goals and achieve them.[67] They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it. They must also attempt to determine the utility or "value" of the choices available to them.[68]

In some planning problems, the agent can assume that it is the only thing acting on the world and can be certain of the consequences of its actions.[69] However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as necessary, requiring the agent to reason under uncertainty.[70]

Multi-agent planning tries to determine the best plan for a community of agents, using cooperation and competition to achieve a given goal. Emergent behavior such as this is used by both evolutionary algorithms and swarm intelligence.[71]
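The deterministic, single-agent case described above can be sketched as a breadth-first search through states, where a plan is a path of actions from the start state to the goal. This is a minimal toy illustration (the one-dimensional "world" and action names are invented for the example), not a production planner:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search for a sequence of actions from start to goal.

    `actions` maps a state to (action_name, next_state) pairs. Assumes a
    deterministic, fully observable world, as in classical planning."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # no plan exists

# Toy world: a robot moves along positions 0..3, one step at a time.
def moves(pos):
    steps = []
    if pos < 3:
        steps.append(("right", pos + 1))
    if pos > 0:
        steps.append(("left", pos - 1))
    return steps

print(plan(0, 3, moves))  # ['right', 'right', 'right']
```

Because breadth-first search explores states in order of plan length, the first plan found is also a shortest one.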

Learning

Main article: machine learning

Important machine learning[72] problems are:

* Unsupervised learning: find a model that matches a stream of input "experiences", and be able to predict what new "experiences" to expect.
* Supervised learning, such as classification (be able to determine what category something belongs in, after seeing a number of examples of things from each category), or regression (given a set of numerical input/output examples, discover a continuous function that would generate the outputs from the inputs).
* Reinforcement learning:[73] the agent is rewarded for good responses and punished for bad ones. (These can be analyzed in terms of decision theory, using concepts like utility.)
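The regression task above — recovering a continuous function from numerical input/output examples — can be sketched with ordinary least squares for a line. This is a minimal hand-rolled illustration, assuming a one-dimensional linear model:

```python
def fit_line(xs, ys):
    """Least-squares regression: fit y = a*x + b to the examples,
    recovering a continuous function that generates the outputs
    from the inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Noiseless examples drawn from y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```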

Natural language processing

Main article: natural language processing

Natural language processing[74] gives machines the ability to read and understand the languages human beings speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.[75]

Motion and manipulation
ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs.

Main article: robotics

The field of robotics[76] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[77] and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).[78]

Perception

Main articles: machine perception, computer vision, and speech recognition

Machine perception[79] is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision[80] is the ability to analyze visual input. A few selected subproblems are speech recognition,[81] facial recognition and object recognition.[82]

Social intelligence

Main article: affective computing

Kismet, a robot with rudimentary social skills.

Emotion and social skills play two roles for an intelligent agent:[83]

* It must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)
* For good human-computer interaction, an intelligent machine also needs to display emotions — at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should appear to have normal emotions itself.

General intelligence

Main articles: strong AI and AI-complete

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them.[7] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what it's talking about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.[84]

Approaches to AI

There are as many approaches to AI as there are AI researchers—any coarse categorization is likely to be unfair to someone. Artificial intelligence communities have grown up around particular problems, institutions and researchers, as well as the theoretical insights that define the approaches described below. Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole.

Cybernetics and brain simulation

In the 40s and 50s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton and the Ratio Club in England.[85]

Traditional symbolic AI

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".[86]

Cognitive simulation
The economist Herbert Simon and Allen Newell studied human problem solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team performed psychological experiments to demonstrate the similarities between human problem solving and the programs (such as their "General Problem Solver") they were developing. This tradition, centered at Carnegie Mellon University,[87] would eventually culminate in the development of the Soar architecture in the middle 80s.[88]

Logical AI
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[89] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Work in logic led to the development of the programming language Prolog and the science of logic programming.[90]

"Scruffy" symbolic AI
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad hoc solutions: they argued that there was no easy answer, no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford),[91] and this still forms the basis of research into commonsense knowledge bases (such as Doug Lenat's Cyc), which must be built one complicated concept at a time.

Knowledge based AI
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[92] The knowledge revolution was also driven by the realization that truly enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic AI

During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[93] By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.[94]

Bottom-up, situated, behavior based or nouvelle AI
Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[95] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. These approaches are also conceptually related to the embodied mind thesis.

Computational Intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s.[96] These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.[97]

The new neats
In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Russell & Norvig (2003) describe this movement as nothing less than a "revolution" and "the victory of the neats."[98]

Intelligent agent paradigm

The "intelligent agent" paradigm became widely accepted during the 1990s.[99][100] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[101] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[102] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents would be rational, thinking human beings.[100]

The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and some can be based on new approaches (without forcing researchers to reject old approaches that have proven useful). The paradigm gives researchers a common language to describe problems and share their solutions with each other and with other fields—such as decision theory—that also use concepts of abstract agents.
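At the simple end of the spectrum described above, an agent can be sketched as a percept-to-action loop. The thermostat example and its thresholds are invented for illustration; real agents may plan, learn, or reason under uncertainty:

```python
class ThermostatAgent:
    """A minimal reflex agent: perceives a temperature and
    acts to keep it near a target."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # Map the current percept directly to an action.
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=20)
print([agent.act(t) for t in (15, 20, 26)])  # ['heat', 'idle', 'cool']
```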

Integrating the approaches

An agent architecture or cognitive architecture allows researchers to build more versatile and intelligent systems out of interacting intelligent agents in a multi-agent system.[103] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[104] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.

Tools of AI research

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Search

Main article: search algorithm

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[105] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[106] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal.[107] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[77] Even some learning algorithms have at their core a search engine.[108]

There are several types of search algorithms:

* "Uninformed" search algorithms eventually search through every possible answer until they locate their goal.[109] Naive algorithms quickly run into problems when the search space grows to astronomical size, resulting in a search that is too slow or never completes.
* Heuristic or "informed" searches use heuristic methods to eliminate choices that are unlikely to lead to their goal, thus drastically reducing the number of possibilities they must explore.[110]
* Local searches, such as hill climbing, simulated annealing and beam search, use techniques borrowed from optimization theory.[111]
* Genetic algorithms are a form of optimization search that imitates the process of natural selection, searching for an artificial phenotype (i.e. any sort of pattern) which passes a fitness measure by producing many copies of the most successful versions (imitating inheritance) and modifying them slightly (imitating mutation).[112]
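A local search from the list above can be sketched in a few lines. This is a toy hill climber over the integers, maximizing an invented objective; it shows the characteristic limitation that it halts at the first local optimum, which techniques like simulated annealing try to escape:

```python
import random

def hill_climb(score, neighbors, start, steps=1000):
    """Local search: repeatedly move to a better neighboring
    solution, stopping when no neighbor improves the score."""
    current = start
    for _ in range(steps):
        better = [n for n in neighbors(current) if score(n) > score(current)]
        if not better:
            return current  # local optimum reached
        current = random.choice(better)
    return current

# Maximize f(x) = -(x - 7)^2 over integers, moving one step at a time.
best = hill_climb(lambda x: -(x - 7) ** 2,
                  lambda x: [x - 1, x + 1],
                  start=0)
print(best)  # 7
```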

Logic

Main article: logic programming

Logic[113] was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal. The most important technical development was J. Alan Robinson's discovery of the resolution and unification algorithm for logical deduction in 1963. This procedure is simple, complete and entirely algorithmic, and can easily be performed by digital computers.[114] However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski suggested representing logical expressions as Horn clauses (statements in the form of rules: "if p then q"), which reduced logical deduction to backward chaining or forward chaining. This greatly alleviated (but did not eliminate) the problem.[106][115]
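Backward chaining over Horn clauses, as Kowalski proposed, can be sketched for the propositional case. The rules and facts below are invented for illustration, and the sketch omits the loop detection a real system (such as a Prolog interpreter) would need for cyclic rule sets:

```python
def backward_chain(goal, rules, facts):
    """Prove `goal` by backward chaining over propositional Horn clauses.

    `rules` is a list of (premises, conclusion) pairs, i.e. rules of the
    form "if p1 and p2 then q"; `facts` is a set of known-true atoms."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, rules, facts) for p in premises):
            return True
    return False

rules = [(["rain"], "wet_grass"),
         (["sprinkler"], "wet_grass"),
         (["wet_grass"], "slippery")]
print(backward_chain("slippery", rules, {"rain"}))  # True
```

To prove "slippery", the procedure works backward to the subgoal "wet_grass", and from there to the known fact "rain".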

Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning,[116] and inductive logic programming is a method for learning.[117]

There are several different forms of logic used in AI research.

* Propositional logic[118] or sentential logic is the logic of statements which can be true or false.

* First order logic[119] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other.

* Fuzzy logic, a version of first order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems.[120]

* Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem.[65]
* Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[59] situation calculus, event calculus and fluent calculus (for representing events and time);[60] causal calculus;[61] belief calculus; and modal logics.[62]
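The fuzzy logic entry above can be illustrated with the common min/max operators (one of several operator families used in fuzzy systems; the truth values here are invented for the example):

```python
def f_and(a, b):
    return min(a, b)      # fuzzy conjunction (min t-norm)

def f_or(a, b):
    return max(a, b)      # fuzzy disjunction

def f_not(a):
    return 1.0 - a        # fuzzy negation

# "The room is warm" with truth 0.7, "the fan is fast" with truth 0.4:
print(f_and(0.7, 0.4), f_or(0.7, 0.4), f_not(0.7))
```

Unlike propositional logic, the result of each connective is itself a degree of truth between 0 and 1.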

Probabilistic methods for uncertain reasoning

Main articles: Bayesian network, hidden Markov model, Kalman filter, decision theory, and utility theory

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. Starting in the late 80s and early 90s, Judea Pearl and others championed the use of methods drawn from probability theory and economics to devise a number of powerful tools to solve these problems.[121]

Bayesian networks[122] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[123] learning (using the expectation-maximization algorithm),[124] planning (using decision networks)[125] and perception (using dynamic Bayesian networks).[126]

Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time[127] (e.g., hidden Markov models[128] and Kalman filters[129]).

Planning problems have also taken advantage of other tools from economics, such as decision theory and decision analysis,[130] information value theory,[68] Markov decision processes,[131] dynamic decision networks,[131] game theory and mechanism design.[132]
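The kind of inference a Bayesian network performs can be shown on the smallest possible case: a two-node network Rain → WetGrass with invented probabilities, queried by direct application of Bayes' rule:

```python
# Invented parameters for a two-node network: Rain -> WetGrass.
p_rain = 0.2
p_wet_given_rain = 0.9
p_wet_given_dry = 0.1

# P(WetGrass) by marginalizing over Rain.
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Bayesian inference: P(Rain | WetGrass) by Bayes' rule.
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 3))  # 0.692
```

Observing wet grass raises the probability of rain from the prior 0.2 to about 0.69; general Bayesian inference algorithms automate this calculation over networks with many variables.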

Classifiers and statistical learning methods

Main articles: classifier (mathematics), statistical classification, and machine learning

The simplest AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems.

Classifiers[133] are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.

When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are many statistical and machine learning approaches.

A wide range of classifiers are available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The most widely used classifiers are the neural network,[134] kernel methods such as the support vector machine,[135] k-nearest neighbor algorithm,[136] Gaussian mixture model,[137] naive Bayes classifier,[138] and decision tree.[108] The performance of these classifiers has been compared over a wide range of classification tasks[139] in order to find data characteristics that determine classifier performance.
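The k-nearest neighbor algorithm from the list above is simple enough to sketch directly: classify a new observation by majority vote among the k closest labeled examples. The tiny data set below reuses the article's "shiny then diamond" flavor and is invented for the example:

```python
from collections import Counter

def knn_classify(query, data, k=3):
    """k-nearest neighbor: label a query point by majority vote
    among the k closest labeled observations (squared Euclidean
    distance, which preserves the ordering of true distances)."""
    by_dist = sorted(
        data,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

data = [((1, 1), "diamond"), ((1, 2), "diamond"), ((2, 1), "diamond"),
        ((8, 8), "rock"), ((9, 8), "rock")]
print(knn_classify((2, 2), data))  # diamond
```

Note the classic trade-off: there is no training phase at all, but every classification must scan the stored data set.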

Neural networks

Main articles: neural networks and connectionism

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

The study of neural networks[134] began with cybernetics researchers working in the decade before the field of AI research was founded. In the 1960s Frank Rosenblatt developed an important early version, the perceptron.[140] Paul Werbos discovered the backpropagation algorithm in 1974,[141] which led to a renaissance in neural network research and connectionism in general in the middle 1980s. The Hopfield net, a form of attractor network, was first described by John Hopfield in 1982.

Neural networks are applied to the problem of learning, using such techniques as Hebbian learning[142] and the relatively new field of Hierarchical Temporal Memory which simulates the architecture of the neocortex.[143]

Social and emergent models

Main article: evolutionary computation

Several algorithms for learning use tools from evolutionary computation, such as genetic algorithms[144] and swarm intelligence.[145]

Control theory

Main article: intelligent control

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[146]

Specialized languages

Main articles: IPL, Lisp (programming language), Prolog, STRIPS, and Planner (programming language)

AI researchers have developed several specialized languages for AI research:

* IPL, one of the first programming languages, developed by Allen Newell, Herbert Simon and J. C. Shaw.[147]

* Lisp[148] was developed by John McCarthy at MIT in 1958.[149] There are many dialects of Lisp in use today.

* Prolog,[150] a language based on logic programming, was invented by French researchers Alain Colmerauer and Philippe Roussel, in collaboration with Robert Kowalski of the University of Edinburgh.[115]

* STRIPS, a planning language developed at Stanford in the 1960s.
* Planner developed at MIT around the same time.

AI applications are also often written in standard languages like C++ and languages designed for mathematics, such as Matlab and Lush.

Evaluating artificial intelligence

How can one determine if an agent is intelligent? In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:

* optimal: it is not possible to perform better
* strong super-human: performs better than all humans
* super-human: performs better than most humans
* sub-human: performs worse than most humans

For example, performance at checkers is optimal,[151] performance at chess is super-human and nearing strong super-human,[152] performance at Go is sub-human,[153] and performance at many everyday tasks performed by humans is sub-human.

Competitions and prizes

Main article: Competitions and prizes in artificial intelligence

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behaviour, data-mining, driverless cars, robot soccer and games.

Applications of artificial intelligence

Main article: Applications of artificial intelligence

Artificial intelligence has successfully been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. Frequently, when a technique reaches mainstream use it is no longer considered artificial intelligence, sometimes described as the AI effect.[154]

See also

* List of basic artificial intelligence topics
* List of AI researchers
* List of AI projects
* List of important AI publications

Notes

1. ^ Poole, Mackworth & Goebel 1998, p. 1 (who use the term "computational intelligence" as a synonym for artificial intelligence). Other textbooks that define AI this way include Nilsson (1998), and Russell & Norvig (2003) (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55)
2. ^ This definition, in terms of goals, actions, perception and environment, is due to Russell & Norvig (2003). Other definitions also include knowledge and learning as additional components.
3. ^ Abstract Intelligent Agents: Paradigms, Foundations and Conceptualization Problems, A.M. Gadomski, J.M. Zytkow, in "Abstract Intelligent Agent, 2". Printed by ENEA, Rome 1995, ISSN 1120-558X
4. ^ Although there is some controversy on this point (see Crevier 1993, p. 50), McCarthy states unequivocally "I came up with the term" in a c|net interview. (See Getting Machines to Think Like Us.)
5. ^ See John McCarthy, What is Artificial Intelligence?
6. ^ a b This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig 2003, Luger & Stubblefield 2004, Poole, Mackworth & Goebel 1998 and Nilsson 1998.
7. ^ a b General intelligence (strong AI) is discussed by popular introductions to AI, such as: Kurzweil 1999, Kurzweil 2005, Hawkins & Blakeslee 2004
8. ^ Russell & Norvig 2003, pp. 5-16
9. ^ See AI Topics: applications
10. ^ a b Poole, Mackworth & Goebel 1998, p. 1
11. ^ The name of the journal Intelligent Systems
12. ^ Russell & Norvig 2003, p. 17
13. ^ McCorduck 2004, p. 5, Russell & Norvig 2003, p. 939
14. ^ The Egyptian statue of Amun is discussed by Crevier (1993, p. 1). McCorduck (2004, pp. 6-9) discusses Greek statues. Hermes Trismegistus expressed the common belief that with these statues, craftsman had reproduced "the true nature of the gods", their sensus and spiritus. McCorduck makes the connection between sacred automatons and Mosaic law (developed around the same time), which expressly forbids the worship of robots.
15. ^ McCorduck 2004, p. 13-14 (Paracelsus)
16. ^ Needham 1986, p. 53
17. ^ McCorduck 2004, p. 6
18. ^ A Thirteenth Century Programmable Robot
19. ^ McCorduck 2004, p. 17
20. ^ McCorduck 2004, p. xviii
21. ^ McCorduck (2004, p. 190-25) discusses Frankenstein and identifies the key ethical issues as scientific hubris and the suffering of the monster, e.g. robot rights.
22. ^ Robots could demand legal rights
23. ^ See the Times Online, Human rights for robots? We’re getting carried away
24. ^ Robot rights: Russell & Norvig 2003, p. 964
25. ^ Russell & Norvig (2003, p. 960-961)
26. ^ Kurzweil 2004
27. ^ Joseph Weizenbaum (the AI researcher who developed the first chatterbot program, ELIZA) argued in 1976 that the misuse of artificial intelligence has the potential to devalue human life. Weizenbaum: Crevier 1993, pp. 132-144, McCorduck 2004, pp. 356-373, Russell & Norvig 2003, p. 961 and Weizenbaum 1976
28. ^ a b Singularity, transhumanism: Kurzweil 2005, Russell & Norvig 2003, p. 963
29. ^ Quoted in McCorduck (2004, p. 401)
30. ^ Among the researchers who laid the foundations of the theory of computation, cybernetics, information theory and neural networks were Claude Shannon, Norbert Wiener, Warren McCulloch, Walter Pitts, Donald Hebb, Donald MacKay, Alan Turing and John von Neumann. McCorduck 2004, pp. 51-107, Crevier 1993, pp. 27-32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3.
31. ^ Crevier 1993, pp. 47-49, Russell & Norvig 2003, p. 17
32. ^ Russell and Norvig write "it was astonishing whenever a computer did anything kind of smartish." Russell & Norvig 2003, p. 18
33. ^ Crevier 1993, pp. 52-107, Moravec 1988, p. 9 and Russell & Norvig 2003, p. 18-21. The programs described are Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
34. ^ Crevier 1993, pp. 64-65
35. ^ Simon 1965, p. 96 quoted in Crevier 1993, p. 109
36. ^ Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
37. ^ See History of artificial intelligence — the problems.
38. ^ Crevier 1993, pp. 115-117, Russell & Norvig 2003, p. 22, NRC 1999 under "Shift to Applied Research Increases Investment." and also see Howe, J. "Artificial Intelligence at Edinburgh University : a Perspective"
39. ^ Crevier 1993, pp. 161-162, 197-203 and Russell & Norvig 2003, p. 24
40. ^ Crevier 1993, p. 203
41. ^ Crevier 1993, pp. 209-210
42. ^ Russell & Norvig 2003, p. 28, NRC 1999 under "Artificial Intelligence in the 90s"
43. ^ Russell & Norvig 2003, pp. 25-26
44. ^ All of these positions are mentioned in standard discussions of the subject, such as Russell & Norvig 2003, pp. 947-960 and Fearn 2007, pp. 38-55
45. ^ Turing 1950, Haugeland 1985, pp. 6-9, Crevier 1993, p. 24, Russell & Norvig 2003, pp. 2-3 and 948
46. ^ Kurzweil 2005, p. 262. Also see Russell & Norvig 2003, p. 957 and Crevier 1993, pp. 271 and 279. The most extreme form of this argument (the brain replacement scenario) was put forward by Clark Glymour in the mid-70s and was touched on by Zenon Pylyshyn and John Searle in 1980. It is now associated with Hans Moravec and Ray Kurzweil.
47. ^ McCarthy et al. 1955 See also Crevier 1993, p. 28
48. ^ Newell & Simon 1963 and Russell & Norvig 2003, p. 18
49. ^ Dreyfus criticized a version of the physical symbol system hypothesis that he called the "psychological assumption": "The mind can be viewed as a device operating on bits of information according to formal rules". Dreyfus 1992, p. 156. See also Dreyfus & Dreyfus 1986, Russell & Norvig 2003, pp. 950-952, Crevier 1993, pp. 120-132 and Fearn 2007, pp. 50-51
50. ^ This is a paraphrase of the most important implication of Gödel's theorems, according to Hofstadter (1979). See also Russell & Norvig 2003, p. 949, Gödel 1931, Church 1936, Kleene 1935, Turing 1937, Turing 1950 under "(2) The Mathematical Objection"
51. ^ Searle 1980. See also Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis," although Searle's arguments, such as the Chinese Room, apply only to physical symbol systems, not to machines in general (he would consider the brain a machine). Also, notice that the positions as Searle states them don't make any commitment to how much intelligence the system has: it is one thing to say a machine can act intelligently, it is another to say it can act as intelligently as a human being.
52. ^ "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent." John McCarthy, Basic Questions
53. ^ Problem solving, puzzle solving, game playing and deduction: Russell & Norvig 2003, chpt. 3-9, Poole et al. chpt. 2,3,7,9, Luger & Stubblefield 2004, chpt. 3,4,6,8, Nilsson, chpt. 7-12.
54. ^ Uncertain reasoning: Russell & Norvig 2003, pp. 452-644, Poole, Mackworth & Goebel 1998, pp. 345-395, Luger & Stubblefield 2004, pp. 333-381, Nilsson 1998, chpt. 19
55. ^ Intractability and efficiency and the combinatorial explosion: Russell & Norvig 2003, pp. 9, 21-22
56. ^ Several famous examples: Wason (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) Tversky, Slovic & Kahneman (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). Lakoff & Núñez (2000) have controversially argued that even our skills at mathematics depend on knowledge and skills that come from "the body", i.e. sensorimotor and perceptual skills. (See Where Mathematics Comes From)
57. ^ Knowledge representation: ACM 1998, I.2.4, Russell & Norvig 2003, pp. 320-363, Poole, Mackworth & Goebel 1998, pp. 23-46, 69-81, 169-196, 235-277, 281-298, 319-345, Luger & Stubblefield 2004, pp. 227-243, Nilsson 1998, chpt. 18
58. ^ Knowledge engineering: Russell & Norvig 2003, pp. 260-266, Poole, Mackworth & Goebel 1998, pp. 199-233, Nilsson 1998, chpt. ~17.1-17.4
59. ^ a b Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts): Russell & Norvig 2003, pp. 349-354, Poole, Mackworth & Goebel 1998, pp. 174-177, Luger & Stubblefield 2004, pp. 248-258, Nilsson 1998, chpt. 18.3
60. ^ a b Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem): Russell & Norvig 2003, pp. 328-341, Poole, Mackworth & Goebel 1998, pp. 281-298, Nilsson 1998, chpt. 18.2
61. ^ a b Causal calculus: Poole, Mackworth & Goebel 1998, pp. 335-337
62. ^ a b Representing knowledge about knowledge: Belief calculus, modal logics: Russell & Norvig 2003, pp. 341-344, Poole, Mackworth & Goebel 1998, pp. 275-277
63. ^ Ontology: Russell & Norvig 2003, pp. 320-328
64. ^ McCarthy & Hayes 1969
65. ^ a b Default reasoning and default logic, non-monotonic logics, circumscription, closed world assumption, abduction (Poole et al. places abduction under "default reasoning". Luger et al. places this under "uncertain reasoning"): Russell & Norvig 2003, pp. 354-360, Poole, Mackworth & Goebel 1998, pp. 248-256, 323-335, Luger & Stubblefield 2004, pp. 335-363, Nilsson 1998, ~18.3.3
66. ^ Crevier 1993, pp. 113-114, Moravec 1988, p. 13, Lenat 1989 (Introduction), Russell & Norvig 2003, p. 21
67. ^ Planning: ACM 1998, ~I.2.8, Russell & Norvig 2003, pp. 375-459, Poole, Mackworth & Goebel 1998, pp. 281-316, Luger & Stubblefield 2004, pp. 314-329, Nilsson 1998, chpt. 10.1-2, 22
68. ^ a b Information value theory: Russell & Norvig 2003, pp. 600-604
69. ^ Classical planning: Russell & Norvig 2003, pp. 375-430, Poole, Mackworth & Goebel 1998, pp. 281-315, Luger & Stubblefield 2004, pp. 314-329, Nilsson 1998, chpt. 10.1-2, 22
70. ^ Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning: Russell & Norvig 2003, pp. 430-449
71. ^ Multi-agent planning and emergent behavior: Russell & Norvig 2003, pp. 449-455
72. ^ Learning: ACM 1998, I.2.6, Russell & Norvig 2003, pp. 649-788, Poole, Mackworth & Goebel 1998, pp. 397-438, Luger & Stubblefield 2004, pp. 385-542, Nilsson 1998, chpt. 3.3 , 10.3, 17.5, 20
73. ^ Reinforcement learning: Russell & Norvig 2003, pp. 763-788, Luger & Stubblefield 2004, pp. 442-449
74. ^ Natural language processing: ACM 1998, I.2.7, Russell & Norvig 2003, pp. 790-831, Poole, Mackworth & Goebel 1998, pp. 91-104, Luger & Stubblefield 2004, pp. 591-632
75. ^ Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation Russell & Norvig 2003, pp. 840-857, Luger & Stubblefield 2004, pp. 623-630
76. ^ Robotics: ACM 1998, I.2.9, Russell & Norvig 2003, pp. 901-942, Poole, Mackworth & Goebel 1998, pp. 443-460
77. ^ a b Moving and configuration space: Russell & Norvig 2003, pp. 916-932
78. ^ Robotic mapping (localization, etc.): Russell & Norvig 2003, pp. 908-915
79. ^ Machine perception: Russell & Norvig 2003, pp. 537-581, 863-898, Nilsson 1998, ~chpt. 6
80. ^ Computer vision: ACM 1998, I.2.10, Russell & Norvig 2003, pp. 863-898, Nilsson 1998, chpt. 6
81. ^ Speech recognition: ACM 1998, ~I.2.7, Russell & Norvig 2003, pp. 568-578
82. ^ Object recognition: Russell & Norvig 2003, pp. 885-892
83. ^ Minsky 2007, Picard 1997
84. ^ Shapiro 1992, p. 9
85. ^ Among the researchers who laid the foundations of cybernetics, information theory and neural networks were Claude Shannon, Norbert Wiener, Warren McCulloch, Walter Pitts, Donald Hebb, Donald MacKay, Alan Turing and John von Neumann. McCorduck 2004, pp. 51-107, Crevier 1993, pp. 27-32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3.
86. ^ Haugeland 1985, pp. 112-117
87. ^ Then called Carnegie Tech
88. ^ Crevier 1993, pp. 52-54, 258-263, Nilsson 1998, p. 275
89. ^ See Science at Google Books, and McCarthy's presentation at AI@50
90. ^ Crevier 1993, pp. 193-196
91. ^ Crevier 1993, pp. 163-176. Neats vs. scruffies: Crevier 1993, pp. 168.
92. ^ Crevier 1993, pp. 145-162
93. ^ The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AI, AI winter, or Frank Rosenblatt. (Crevier 1993, pp. 102-105).
94. ^ Nilsson (1998, p. 7) characterizes these newer approaches to AI as "sub-symbolic".
95. ^ Brooks 1990 and Moravec 1988
96. ^ Crevier 1993, pp. 214-215 and Russell & Norvig 2003, p. 25
97. ^ See IEEE Computational Intelligence Society
98. ^ Russell & Norvig 2003, p. 25-26
99. ^ "The whole-agent view is now widely accepted in the field" Russell & Norvig 2003, p. 55.
100. ^ a b The intelligent agent paradigm is discussed in major AI textbooks, such as: Russell & Norvig 2003, pp. 27, 32-58, 968-972, Poole, Mackworth & Goebel 1998, pp. 7-21, Luger & Stubblefield 2004, pp. 235-240
101. ^ For example, both John Doyle (Doyle 1983) and Marvin Minsky's popular classic The Society of Mind (Minsky 1986) used the word "agent" to describe modular AI systems.
102. ^ Russell & Norvig 2003, pp. 27, 55
103. ^ Agent architectures, hybrid intelligent systems, and multi-agent systems: ACM 1998, I.2.11, Russell & Norvig (2003, pp. 27, 932, 970-972) and Nilsson (1998, chpt. 25)
104. ^ Albus, J. S., "4-D/RCS reference model architecture for unmanned ground vehicles", in G. Gerhart, R. Gunderson and C. Shoemaker, eds., Proceedings of the SPIE AeroSense Session on Unmanned Ground Vehicle Technology, vol. 3693, pp. 11-20
105. ^ Search algorithms: Russell & Norvig 2003, pp. 59-189, Poole, Mackworth & Goebel 1998, pp. 113-163, Luger & Stubblefield 2004, pp. 79-164, 193-219, Nilsson 1998, chpt. 7-12
106. ^ a b Forward chaining, backward chaining, Horn clauses, and logical deduction as search: Russell & Norvig 2003, pp. 217-225, 280-294, Poole, Mackworth & Goebel 1998, pp. ~46-52, Luger & Stubblefield 2004, pp. 62-73, Nilsson 1998, chpt. 4.2, 7.2
107. ^ State space search and planning: Russell & Norvig 2003, pp. 382-387, Poole, Mackworth & Goebel 1998, pp. 298-305, Nilsson 1998, chpt. 10.1-2
108. ^ a b Decision tree: Russell & Norvig 2003, pp. 653-664, Poole, Mackworth & Goebel 1998, pp. 403-408, Luger & Stubblefield 2004, pp. 408-417
109. ^ Naive searches (breadth first search, depth first search and general state space search): Russell & Norvig 2003, pp. 59-93, Poole, Mackworth & Goebel 1998, pp. 113-132, Luger & Stubblefield 2004, pp. 79-121, Nilsson 1998, chpt. 8
110. ^ Heuristic or informed searches (e.g., greedy best first and A*): Russell & Norvig 2003, pp. 94-109, Poole, Mackworth & Goebel 1998, pp. 132-147, Luger & Stubblefield 2004, pp. 133-150, Nilsson 1998, chpt. 9
111. ^ Optimization searches: Russell & Norvig 2003, pp. 110-116,120-129, Poole, Mackworth & Goebel 1998, pp. 56-163, Luger & Stubblefield 2004, pp. 127-133
112. ^ Genetic algorithms: Russell & Norvig 2003, pp. 116-119, Poole, Mackworth & Goebel 1998, pp. 162, Luger & Stubblefield 2004, pp. 509-530, Nilsson 1998, chpt. 4.2
113. ^ Logic: ACM 1998, ~I.2.3, Russell & Norvig 2003, pp. 194-310, Luger & Stubblefield 2004, pp. 35-77, Nilsson 1998, chpt. 13-16
114. ^ Resolution and unification: Russell & Norvig 2003, pp. 213-217, 275-280, 295-306, Poole, Mackworth & Goebel 1998, pp. 56-58, Luger & Stubblefield 2004, pp. 554-575, Nilsson 1998, chpt. 14 & 16
115. ^ a b History of logic programming: Crevier 1993, pp. 190-196. Advice Taker: McCorduck 2004, p. 51, Russell & Norvig 2003, pp. 19
116. ^ Satplan: Russell & Norvig 2003, pp. 402-407, Poole, Mackworth & Goebel 1998, pp. 300-301, Nilsson 1998, chpt. 21
117. ^ Explanation based learning, relevance based learning, inductive logic programming, case based reasoning: Russell & Norvig 2003, pp. 678-710, Poole, Mackworth & Goebel 1998, pp. 414-416, Luger & Stubblefield 2004, pp. ~422-442, Nilsson 1998, chpt. 10.3, 17.5
118. ^ Propositional logic: Russell & Norvig 2003, pp. 204-233, Luger & Stubblefield 2004, pp. 45-50 Nilsson 1998, chpt. 13
119. ^ First order logic and features such as equality: ACM 1998, ~I.2.4, Russell & Norvig 2003, pp. 240-310, Poole, Mackworth & Goebel 1998, pp. 268-275, Luger & Stubblefield 2004, pp. 50-62, Nilsson 1998, chpt. 15
120. ^ Fuzzy logic: Russell & Norvig 2003, pp. 526-527
121. ^ Russell & Norvig 2003, pp. 25-26 (on Judea Pearl's contribution). Stochastic methods are described in all the major AI textbooks: ACM 1998, ~I.2.3, Russell & Norvig 2003, pp. 462-644, Poole, Mackworth & Goebel 1998, pp. 345-395, Luger & Stubblefield 2004, pp. 165-191, 333-381, Nilsson 1998, chpt. 19
122. ^ Bayesian networks: Russell & Norvig 2003, pp. 492-523, Poole, Mackworth & Goebel 1998, pp. 361-381, Luger & Stubblefield 2004, pp. ~182-190, ~363-379, Nilsson 1998, chpt. 19.3-4
123. ^ Bayesian inference algorithm: Russell & Norvig 2003, pp. 504-519, Poole, Mackworth & Goebel 1998, pp. 361-381, Luger & Stubblefield 2004, pp. ~363-379, Nilsson 1998, chpt. 19.4 & 7
124. ^ Bayesian learning and the expectation-maximization algorithm: Russell & Norvig 2003, pp. 712-724, Poole, Mackworth & Goebel 1998, pp. 424-433, Nilsson 1998, chpt. 20
125. ^ Bayesian decision networks: Russell & Norvig 2003, pp. 597-600
126. ^ Dynamic Bayesian network: Russell & Norvig 2003, pp. 551-557
127. ^ Russell & Norvig 2003, pp. 537-581
128. ^ Hidden Markov model: Russell & Norvig 2003, pp. 549-551
129. ^ Kalman filter: Russell & Norvig 2003, pp. 551-557
130. ^ decision theory and decision analysis: Russell & Norvig 2003, pp. 584-597, Poole, Mackworth & Goebel 1998, pp. 381-394
131. ^ a b Markov decision processes and dynamic decision networks:Russell & Norvig 2003, pp. 613-631
132. ^ Game theory and mechanism design: Russell & Norvig 2003, pp. 631-643
133. ^ Statistical learning methods and classifiers: Russell & Norvig 2003, pp. 712-754, Luger & Stubblefield 2004, pp. 453-541
134. ^ a b Neural networks and connectionism: Russell & Norvig 2003, pp. 736-748, Poole, Mackworth & Goebel 1998, pp. 408-414, Luger & Stubblefield 2004, pp. 453-505, Nilsson 1998, chpt. 3
135. ^ Kernel methods: Russell & Norvig 2003, pp. 749-752
136. ^ K-nearest neighbor algorithm: Russell & Norvig 2003, pp. 733-736
137. ^ Gaussian mixture model: Russell & Norvig 2003, pp. 725-727
138. ^ Naive Bayes classifier: Russell & Norvig 2003, pp. 718
139. ^ van der Walt, Christiaan. Data characteristics that determine classifier performance.
140. ^ Perceptrons: Russell & Norvig 2003, pp. 740-743, Luger & Stubblefield 2004, pp. 458-467
141. ^ Backpropagation: Russell & Norvig 2003, pp. 744-748, Luger & Stubblefield 2004, pp. 467-474, Nilsson 1998, chpt. 3.3
142. ^ Competitive learning, Hebbian coincidence learning, Hopfield networks and attractor networks: Luger & Stubblefield 2004, pp. 474-505.
143. ^ Hawkins & Blakeslee 2004
144. ^ Genetic algorithms for learning: Luger & Stubblefield 2004, pp. 509-530, Nilsson 1998, chpt. 4.2
145. ^ Artificial life and society based learning: Luger & Stubblefield 2004, pp. 530-541
146. ^ Control theory: ACM 1998, ~I.2.8, Russell & Norvig 2003, pp. 926-932
147. ^ Crevier 1993, p. 46-48
148. ^ Lisp: Luger & Stubblefield 2004, pp. 723-821
149. ^ Crevier 1993, pp. 59-62, Russell & Norvig 2003, p. 18
150. ^ Prolog: Poole, Mackworth & Goebel 1998, pp. 477-491, Luger & Stubblefield 2004, pp. 641-676, 575-581
151. ^ Schaeffer, Jonathan (2007-07-19). Checkers Is Solved. Science. Retrieved on 2007-07-20.
152. ^ Computer Chess#Computers versus humans
153. ^ Computer Go#Computers versus humans
154. ^ AI set to exceed human brain power (web article). CNN.com (2006-07-26). Retrieved on 2008-02-26.

References

Major AI textbooks

* Luger, George & Stubblefield, William (2004), Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.), The Benjamin/Cummings Publishing Company, Inc., pp. 720, ISBN 0-8053-4780-1
* Nilsson, Nils (1998), Artificial Intelligence: A New Synthesis, Morgan Kaufmann Publishers, ISBN 978-1-55860-467-4
* Russell, Stuart J. & Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2
* Poole, David; Mackworth, Alan & Goebel, Randy (1998), Computational Intelligence: A Logical Approach, Oxford University Press

Other sources

* ACM (Association for Computing Machinery) (1998), ACM Computing Classification System: Artificial intelligence
* Brooks, Rodney (1990), "Elephants Don't Play Chess", Robotics and Autonomous Systems 6: 3-15. Retrieved on 30 August 2007
* Buchanan, Bruce G. (2005), "A (Very) Brief History of Artificial Intelligence", AI Magazine: 53-60. Retrieved on 30 August 2007
* Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3
* Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass.: MIT Press, ISBN 0-262-08153-9.
* Hawkins, Jeff & Blakeslee, Sandra (2004), On Intelligence, New York, NY: Owl Books, ISBN 0-8050-7853-3.
* Kahneman, Daniel; Slovic, D. & Tversky, Amos (1982), Judgment under uncertainty: Heuristics and biases, New York: Cambridge University Press.
* Kurzweil, Ray (1999), The Age of Spiritual Machines, Penguin Books, ISBN 0-670-88217-8
* Kurzweil, Ray (2005), The Singularity is Near, Penguin Books, ISBN 0-670-03384-7
* Lakoff, George & Núñez, Rafael E. (2000), Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, Basic Books, ISBN 0-465-03771-2.
* Lenat, Douglas (1989), Building Large Knowledge-Based Systems, Addison-Wesley
* Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
* McCarthy, John; Minsky, Marvin; Rochester, Nathan et al. (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence
* McCarthy, John & Hayes, P. J. (1969), "Some philosophical problems from the standpoint of artificial intelligence", Machine Intelligence 4: 463-502
* McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1.
* Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall
* Minsky, Marvin (2006), The Emotion Machine, New York, NY: Simon & Schuster, ISBN 0-7432-7663-9
* Moravec, Hans (1976), The Role of Raw Power in Intelligence
* Moravec, Hans (1988), Mind Children, Harvard University Press
* NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press
* Newell, Allen & Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E.A. & Feldman, J., Computers and Thought, McGraw-Hill
* Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417-457
* Shapiro, Stuart C. (1992), "Artificial Intelligence", in Shapiro, Stuart C., Encyclopedia of Artificial Intelligence (2nd ed.), New York: John Wiley, pp. 54-57
* Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row
* Turing, Alan (October 1950), "Computing machinery and intelligence", Mind LIX (236): 433-460, ISSN 0026-4423, doi:10.1093/mind/LIX.236.433
* Wason, P. C. (1966), "Reasoning", in Foss, B. M., New horizons in psychology, Harmondsworth: Penguin
* Weizenbaum, Joseph (1976), Computer Power and Human Reason, San Francisco: W.H. Freeman & Company, ISBN 0716704641

Further reading

* Sun, R. & Bookman, L. (eds.) (1994), Computational Architectures: Integrating Neural and Symbolic Processes, Needham, MA: Kluwer Academic Publishers

External links

* The Futurist magazine interviews "AI chasers" Rodney Brooks, Peter Norvig, Barney Pell, et al.

* AI at the Open Directory Project
* AI with Neural Networks
* AI-Tools, the Open Source AI community homepage
* Artificial Intelligence Directory, a directory of Web resources related to artificial intelligence
* The Association for the Advancement of Artificial Intelligence
* Freeview Video 'Machines with Minds' by the Vega Science Trust and the BBC/OU
* Heuristics and artificial intelligence in finance and investment
* John McCarthy's frequently asked questions about AI
* Jonathan Edwards looks at AI (BBC audio)
* Artificial Intelligence in the Computer science directory
* Generation5 - Large artificial intelligence portal with articles and news.
* Mindmakers.org, an online organization for people building large scale A.I. systems
* Ray Kurzweil's website dedicated to AI including prediction of future development in AI
* AI articles on the Accelerating Future blog
* AI Genealogy Project
* Artificial intelligence library and other useful links
* International Journal of Computational Intelligence
* International Journal of Intelligent Technology
* AI definitions at Labor Law Talk
* Virtual Humans Forum and Directory

Retrieved from "http://en.wikipedia.org/wiki/Artificial_intelligence"
* This page was last modified on 7 March 2008, at 19:25.
* All text is available under the terms of the GNU Free Documentation License. (See Copyrights for details.)
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a U.S. registered 501(c)(3) tax-deductible nonprofit charity.

On "Intelligence Amplification":
Intelligence amplification (IA) (also referred to as cognitive augmentation and machine augmented intelligence) refers to the effective use of information technology in augmenting human intelligence. The theory was developed in the 1950s and 1960s by cybernetics and early computer pioneers.

Major Contributions

William Ross Ashby: Intelligence Amplification

The term intelligence amplification (IA) has enjoyed wide currency since William Ross Ashby wrote of "amplifying intelligence" in his Introduction to Cybernetics (1956). Related ideas were explicitly proposed as an alternative to artificial intelligence by Hao Wang from the early days of automatic theorem provers.

..."problem solving" is largely, perhaps entirely, a matter of appropriate selection. Take, for instance, any popular book of problems and puzzles. Almost every one can be reduced to the form: out of a certain set, indicate one element. ... It is, in fact, difficult to think of a problem, either playful or serious, that does not ultimately require an appropriate selection as necessary and sufficient for its solution. It is also clear that many of the tests used for measuring "intelligence" are scored essentially according to the candidate's power of appropriate selection. ... Thus it is not impossible that what is commonly referred to as "intellectual power" may be equivalent to "power of appropriate selection". Indeed, if a talking Black Box were to show high power of appropriate selection in such matters — so that, when given difficult problems it persistently gave correct answers — we could hardly deny that it was showing the 'behavioral' equivalent of "high intelligence". If this is so, and as we know that power of selection can be amplified, it seems to follow that intellectual power, like physical power, can be amplified. Let no one say that it cannot be done, for the gene-patterns do it every time they form a brain that grows up to be something better than the gene-pattern could have specified in detail. What is new is that we can now do it synthetically, consciously, deliberately.

Ashby, W.R., An Introduction to Cybernetics, Chapman and Hall, London, UK, 1956. Reprinted, Methuen and Company, London, UK, 1964. PDF
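Ashby's claim that problem solving reduces to appropriate selection, and that selective power can be amplified, can be illustrated with a toy sketch (the filter, the attention budget and the "human" picker below are purely illustrative assumptions): a cheap mechanical sieve discards most candidates so that scarce human judgment is spent only on the survivors.

```python
def amplified_selection(candidates, machine_filter, human_pick, budget=5):
    """Ashby-style amplification: a fast mechanical test narrows a large
    candidate set; the (expensive) human judgment inspects only a shortlist."""
    survivors = [c for c in candidates if machine_filter(c)]
    shortlist = survivors[:budget]   # human attention is the scarce resource
    return human_pick(shortlist)

# Toy problem: from a large pool, select a number divisible by both 7 and 13.
pool = list(range(1, 100_000))
answer = amplified_selection(
    pool,
    machine_filter=lambda n: n % 7 == 0 and n % 13 == 0,  # mechanical sieve
    human_pick=lambda xs: max(xs),                        # stand-in for human judgment
)
print(answer)  # 455
```

The machine does not "solve" the problem; it amplifies the human's power of appropriate selection by shrinking the set from which the selection is made.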

J.C.R. Licklider: Man-Computer Symbiosis

"Man-Computer Symbiosis" is a key speculative paper published in 1960 by the psychologist and computer scientist J.C.R. Licklider. It envisions mutually interdependent, tightly coupled human brains and computing machines "living together" and complementing each other's strengths to a high degree:

"Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

Licklider, J.C.R., "Man-Computer Symbiosis", IRE Transactions on Human Factors in Electronics, vol. HFE-1, 4-11, Mar 1960. Eprint

In Licklider's vision, many of the pure artificial intelligence systems envisioned at the time by over-optimistic researchers would prove unnecessary. (This paper is also seen by some historians as marking the genesis of ideas about computer networks which later blossomed into the Internet).

Douglas Engelbart: Augmenting Human Intellect

Licklider's research was similar in spirit to that of his DARPA contemporary and protégé Douglas Engelbart. Both held a view of how computers could be used that was at odds with the then-prevalent one (which saw them as devices principally useful for computations), and both were key proponents of the way in which computers are now used (as generic adjuncts to humans).

Engelbart reasoned that the state of our current technology controls our ability to manipulate information, and that this in turn controls our ability to develop new, improved technologies. He thus set himself the revolutionary task of developing computer-based technologies for manipulating information directly, and of improving individual and group processes for knowledge-work. Engelbart's philosophy and research agenda are most clearly and directly expressed in the 1962 research report which Engelbart refers to as his 'bible': Augmenting Human Intellect: A Conceptual Framework. The concept of network-augmented intelligence is attributed to Engelbart on the basis of this pioneering work.

"Increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable. And by complex situations we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers--whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human feel for a situation usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids."

Engelbart, D.C., "Augmenting Human Intellect: A Conceptual Framework", Summary Report AFOSR-3233, Stanford Research Institute, Menlo Park, CA, Oct 1962. Eprint

Further reading

* Asaro, Peter (2008). "From Mechanisms of Adaptation to Intelligence Amplifiers: The Philosophy of W. Ross Ashby," in Michael Wheeler, Philip Husbands and Owen Holland (eds.) The Mechanical Mind in History, Cambridge, MA: MIT Press.

* Ashby, W.R., Design for a Brain, Chapman and Hall, London, UK, 1952. Second edition, Chapman and Hall, London, UK, 1966.

* Skagestad, Peter, "Thinking with Machines: Intelligence Augmentation, Evolutionary Epistemology, and Semiotic", Journal of Social and Evolutionary Systems, vol. 16, no. 2, pp. 157-180, 1993. Eprint

* Smart Business Networks (or, Let's Create 'Life' from Inert Information) on SSRN

* Waldrop, M. Mitchell, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal, Viking Press, New York, NY, 2001. Licklider's biography, contains discussion of the importance of this paper.

See also

* Ashby, William Ross
* Cybernetics
* Brain-computer interface
* Cyborg
* Engelbart, Douglas
* Human enhancement
* Sensemaking
* Licklider, J.C.R.
* Peirce, Charles Sanders
* Flexyx Neurotherapy System
* Symbiotic intelligence
* Wisdom of crowds
* Mechanization
* Knowledge worker

External links

* Overview of Engelbart's framework at Fleabyte.org
* IT Conversations: Doug Engelbart - Large-Scale Collective IQ
* Applied intelligence amplification
* Intelligence, Amplified

Retrieved from "http://en.wikipedia.org/wiki/Intelligence_amplification"
Categories: History of human-computer interaction | Cybernetics | Biocybernetics
* This page was last modified on 4 March 2008, at 18:25.
* All text is available under the terms of the GNU Free Documentation License. (See Copyrights for details.)
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a U.S. registered 501(c)(3) tax-deductible nonprofit charity.

Biotechnology:

Biotechnology is technology based on biology, especially when used in agriculture, food science, and medicine. The United Nations Convention on Biological Diversity defines biotechnology as:[1]

""Biotechnology" means any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use."

Biotechnology is often used to refer to the genetic engineering technology of the 21st century; however, the term encompasses a wider range and history of procedures for modifying biological organisms to meet human needs, going back to the initial modification of native plants into improved food crops through artificial selection and hybridization. Bioengineering is the science upon which all biotechnological applications are based. With the development of new approaches and modern techniques, traditional biotechnology industries are also acquiring new horizons, enabling them to improve the quality of their products and increase the productivity of their systems.

Before 1971, the term "biotechnology" was primarily used in the food processing and agriculture industries. Since the 1970s, it has been used by the Western scientific establishment to refer to laboratory-based techniques developed in biological research, such as recombinant DNA or tissue culture-based processes, or horizontal gene transfer in living plants using vectors such as the Agrobacterium bacteria to transfer DNA into a host organism. In fact, the term can be used in a much broader sense to describe the whole range of methods, both ancient and modern, used to manipulate organic materials to meet the demands of food production. The term could thus be defined as "the application of indigenous and/or scientific knowledge to the management of (parts of) microorganisms, or of cells and tissues of higher organisms, so that these supply goods and services of use to the food industry and its consumers."[2]

Biotechnology combines disciplines like genetics, molecular biology, biochemistry, embryology and cell biology, which are in turn linked to practical disciplines like chemical engineering, information technology, and robotics. Patho-biotechnology describes the exploitation of pathogens or pathogen derived compounds for beneficial effect.
Contents

* 1 History
* 2 Applications
  * 2.1 Medicine
    * 2.1.1 Pharmacogenomics
    * 2.1.2 Pharmaceutical products
    * 2.1.3 Genetic testing
      * 2.1.3.1 Controversial questions
    * 2.1.4 Gene therapy
    * 2.1.5 Human Genome Project
    * 2.1.6 Cloning
    * 2.1.7 Current Research
  * 2.2 Agriculture
    * 2.2.1 Improve yield from crops
    * 2.2.2 Reduced vulnerability of crops to environmental stresses
    * 2.2.3 Increased nutritional qualities of food crops
    * 2.2.4 Improved taste, texture or appearance of food
    * 2.2.5 Reduced dependence on fertilizers, pesticides and other agrochemicals
    * 2.2.6 Production of novel substances in crop plants
    * 2.2.7 Criticism
  * 2.3 Biological engineering
  * 2.4 Bioremediation and Biodegradation
  * 2.5 The Media's Perception of Biotechnology
* 3 Notable researchers and individuals
* 4 See also
* 5 References
* 6 Further reading
* 7 External links

History
Brewing was an early application of biotechnology

Main article: History of Biotechnology

The most enduring practical use of biotechnology is the cultivation of plants to produce food suitable for humans. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution, and its processes and methods have been refined by other mechanical and biological sciences ever since. Through early biotechnology, farmers were able to select the best-suited and highest-yield crops to produce enough food to support a growing population. Other uses of biotechnology became necessary as crops and fields grew increasingly large and difficult to maintain: specific organisms and organism byproducts were used to fertilize fields, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have also inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology. Cultures such as those in Mesopotamia, Egypt, and Iran developed the process of brewing beer, which is still carried out by the same basic method: malted grains (containing enzymes) convert starch from grains into sugar, and then specific yeasts are added to produce beer, breaking the carbohydrates in the grains down into alcohols such as ethanol. Later, other cultures developed lactic acid fermentation, which allowed the fermentation and preservation of other forms of food, and fermentation was also used in this period to produce leavened bread. Although fermentation was not fully understood until Louis Pasteur's work in 1857, it was one of the first uses of biotechnology to convert one food source into another.
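The grain-to-ethanol conversion described above follows well-known stoichiometry: yeast ferments each glucose molecule into two molecules of ethanol and two of carbon dioxide (C6H12O6 → 2 C2H5OH + 2 CO2). As a rough back-of-the-envelope sketch (the function name and the 100 g example are illustrative; real fermentations yield somewhat less than the theoretical maximum), the ceiling on ethanol output per unit of sugar can be computed from the molar masses:

```python
# Theoretical ethanol yield from glucose fermentation:
#   C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16  # g/mol, glucose
M_ETHANOL = 46.07   # g/mol, ethanol

def theoretical_ethanol_yield(grams_glucose: float) -> float:
    """Grams of ethanol produced if every glucose molecule is fermented."""
    moles_glucose = grams_glucose / M_GLUCOSE
    return 2 * moles_glucose * M_ETHANOL  # two ethanol molecules per glucose

# 100 g of glucose can give at most about 51 g of ethanol (~0.51 g/g)
print(round(theoretical_ethanol_yield(100.0), 1))
```

In practice brewers' yields fall below this theoretical figure, since some sugar goes into yeast growth and byproducts.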

Combinations of plants and other organisms were used as medications in many early civilizations. Since as early as 200 BC, people began to use disabled or minute amounts of infectious agents to immunize themselves against infections. These and similar processes have been refined in modern medicine and have led to many developments such as antibiotics, vaccines, and other methods of fighting sickness.

In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.[3]

The field of modern biotechnology is thought to have largely begun on June 16, 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented.[4] Indian-born Ananda Chakrabarty, working for General Electric, had developed a bacterium (derived from the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills. In another line of work, researchers at a university in Florida are studying ways to prevent tooth decay: they altered the oral bacterium Streptococcus mutans, stripping it down so that it cannot produce lactic acid.

Applications

Biotechnology has applications in four major industrial areas: health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology, for example:

* Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genomic manipulation.

A rose plant that began as cells grown in a tissue culture

* Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environmental conditions or in the presence (or absence) of certain agricultural chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby eliminating the need for external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.

* White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals (examples using oxidoreductases are given in Feng Xu (2005) “Applications of oxidoreductases: Recent progress” Ind. Biotechnol. 1, 38-50 [1]). White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods.

* Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.

* The investments and economic output of all of these types of applied biotechnologies form what has been described as the bioeconomy.

* Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[5] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
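As a small, concrete taste of the kind of computation bioinformatics involves (this is a minimal illustrative sketch; the function name is mine, not from any particular library), the snippet below computes the GC content of a DNA sequence, a basic statistic used when characterizing genomes:

```python
def gc_content(seq: str) -> float:
    """Fraction of bases in a DNA sequence that are G or C."""
    seq = seq.upper()                          # accept lowercase input
    gc = seq.count("G") + seq.count("C")       # count guanine and cytosine
    return gc / len(seq)

# 6 of the 15 bases below are G or C, so the GC content is 0.4
print(gc_content("ATGGCCATTGTAATG"))
```

Real bioinformatics toolkits (such as Biopython) provide this and far more, but the underlying idea is the same: treating sequences as data and extracting statistics from them at scale.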

Medicine

In medicine, modern biotechnology finds promising applications in such areas as

* pharmacogenomics;
* drug production;
* genetic testing; and
* gene therapy.

Pharmacogenomics
DNA Microarray chip -- Some can do as many as a million blood tests at once

Main article: Pharmacogenomics

Pharmacogenomics is the study of how the genetic inheritance of an individual affects his/her body’s response to drugs. It is a coined word derived from the words “pharmacology” and “genomics”. It is hence the study of the relationship between pharmaceuticals and genetics. The vision of pharmacogenomics is to be able to design and produce drugs that are adapted to each person’s genetic makeup.[6]

Pharmacogenomics results in the following benefits:[6]

1. Development of tailor-made medicines. Using pharmacogenomics, pharmaceutical companies can create drugs based on the proteins, enzymes and RNA molecules that are associated with specific genes and diseases. These tailor-made drugs promise not only to maximize therapeutic effects but also to decrease damage to nearby healthy cells.

2. More accurate methods of determining appropriate drug dosages. Knowing a patient’s genetics will enable doctors to determine how well his/her body can process and metabolize a medicine. This will maximize the value of the medicine and decrease the likelihood of overdose.

3. Improvements in the drug discovery and approval process. The discovery of potential therapies will be made easier using genome targets. Genes have been associated with numerous diseases and disorders. With modern biotechnology, these genes can be used as targets for the development of effective new therapies, which could significantly shorten the drug discovery process.

4. Better vaccines. Safer vaccines can be designed and produced by organisms transformed by means of genetic engineering. These vaccines will elicit the immune response without the attendant risks of infection. They will be inexpensive, stable, easy to store, and capable of being engineered to carry several strains of pathogen at once.

Pharmaceutical products
Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it together, and the histidine residues involved in zinc binding.

Most traditional pharmaceutical drugs are relatively simple molecules that have been found, primarily through trial and error, to treat the symptoms of a disease or illness. Biopharmaceuticals, by contrast, are large biological molecules (proteins) that usually, though not always (insulin used to treat type 1 diabetes mellitus being an exception), target the underlying mechanisms and pathways of a malady; the biopharmaceutical industry is relatively young. Biopharmaceuticals can reach targets in humans that may not be accessible with traditional medicines. A patient is typically dosed with a small molecule via a tablet, while a large molecule is typically injected.

Small molecules are manufactured by chemistry but large molecules are created by living cells such as those found in the human body: for example, bacteria cells, yeast cells, animal or plant cells.

Modern biotechnology is often associated with the use of genetically altered microorganisms such as E. coli or yeast for the production of substances like synthetic insulin or antibiotics. It can also refer to transgenic animals or transgenic plants, such as Bt corn. Genetically altered mammalian cells, such as Chinese Hamster Ovary (CHO) cells, are also used to manufacture certain pharmaceuticals. Another promising new biotechnology application is the development of plant-made pharmaceuticals.

Biotechnology is also commonly associated with landmark breakthroughs in new medical therapies to treat hepatitis B, hepatitis C, cancers, arthritis, haemophilia, bone fractures, multiple sclerosis, and cardiovascular disorders. The biotechnology industry has also been instrumental in developing molecular diagnostic devices that can be used to define the target patient population for a given biopharmaceutical. Herceptin, for example, was the first drug approved for use with a matching diagnostic test and is used to treat breast cancer in women whose cancer cells express the protein HER2.

Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic human insulin by joining its gene to a plasmid vector and inserting it into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreases of cattle and pigs. The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at low cost.[7]

Since then modern biotechnology has made it possible to produce more easily and cheaply human growth hormone, clotting factors for hemophiliacs, fertility drugs, erythropoietin and other drugs.[8] Most drugs today are based on about 500 molecular targets. Genomic knowledge of the genes involved in diseases, disease pathways, and drug-response sites are expected to lead to the discovery of thousands more new targets.[8]

Genetic testing
Gel electrophoresis

Genetic testing involves the direct examination of the DNA molecule itself. A scientist scans a patient’s DNA sample for mutated sequences.

There are two major types of gene tests. In the first type, a researcher may design short pieces of DNA (“probes”) whose sequences are complementary to the mutated sequences. These probes will seek their complement among the base pairs of an individual’s genome. If the mutated sequence is present in the patient’s genome, the probe will bind to it and flag the mutation. In the second type, a researcher may conduct the gene test by comparing the sequence of DNA bases in a patient’s gene to the normal version of the gene found in healthy individuals.
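The probe-based test described above can be sketched computationally. A probe hybridizes to the sequence that is its reverse complement, so in a simplified single-strand model (the function names are mine, and real assays must also account for partial matches and hybridization conditions), detecting a mutated sequence reduces to a substring search:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """The strand a DNA probe would hybridize to, read 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def probe_binds(probe: str, genome: str) -> bool:
    """True if the probe's complementary target occurs in the genome."""
    return reverse_complement(probe) in genome

# Design a probe against a hypothetical mutated sequence "GATTC":
probe = reverse_complement("GATTC")         # the probe is the complement
print(probe_binds(probe, "AAAGATTCAAA"))    # mutation present -> True
print(probe_binds(probe, "AAAGAATCAAA"))    # mutation absent  -> False
```

The second type of test, direct sequence comparison against a healthy reference, amounts to aligning two strings and reporting positions where they differ.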

Genetic testing is now used for:

* Determining sex
* Carrier screening, or the identification of unaffected individuals who carry one copy of a gene for a disease that requires two copies for the disease to manifest
* Prenatal diagnostic screening
* Newborn screening
* Presymptomatic testing for predicting adult-onset disorders
* Presymptomatic testing for estimating the risk of developing adult-onset cancers
* Confirmational diagnosis of symptomatic individuals
* Forensic/identity testing

Some genetic tests are already available, although most of them are used in developed countries. The tests currently available can detect mutations associated with rare genetic disorders like cystic fibrosis, sickle cell anemia, and Huntington’s disease. Recently, tests have been developed to detect mutation for a handful of more complex conditions such as breast, ovarian, and colon cancers. However, gene tests may not detect every mutation associated with a particular condition because many are as yet undiscovered, and the ones they do detect may present different risks to different people and populations.[8]

Controversial questions
The bacterium E. coli is routinely genetically engineered.

Several issues have been raised regarding the use of genetic testing:

1. Absence of cure. There is still a lack of effective treatment or preventive measures for many diseases and conditions now being diagnosed or predicted using gene tests. Thus, revealing information about risk of a future disease that has no existing cure presents an ethical dilemma for medical practitioners.

2. Ownership and control of genetic information. Who will own and control genetic information, or information about genes, gene products, or inherited characteristics derived from an individual or a group of people like indigenous communities? At the macro level, there is a possibility of a genetic divide, with developing countries that do not have access to medical applications of biotechnology being deprived of benefits accruing from products derived from genes obtained from their own people. Moreover, genetic information can pose a risk for minority population groups as it can lead to group stigmatization.

At the individual level, the absence of privacy and anti-discrimination legal protections in most countries can lead to discrimination in employment or insurance or other misuse of personal genetic information. This raises questions such as whether genetic privacy is different from medical privacy.[9]

3. Reproductive issues. These include the use of genetic information in reproductive decision-making and the possibility of genetically altering reproductive cells that may be passed on to future generations. For example, germline therapy forever changes the genetic make-up of an individual’s descendants. Thus, any error in technology or judgment may have far-reaching consequences. Ethical issues like designer babies and human cloning have also given rise to controversies between and among scientists and bioethicists, especially in the light of past abuses with eugenics.

4. Clinical issues. These center on the capabilities and limitations of doctors and other health-service providers, people identified with genetic conditions, and the general public in dealing with genetic information.

5. Effects on social institutions. Genetic tests reveal information about individuals and their families. Thus, test results can affect the dynamics within social institutions, particularly the family.

6. Conceptual and philosophical implications regarding human responsibility, free will vis-à-vis genetic determinism, and the concepts of health and disease.

Gene therapy

Main article: Gene therapy

Gene therapy using an Adenovirus vector. A new gene is inserted into an adenovirus vector, which is used to introduce the modified DNA into a human cell. If the treatment is successful, the new gene will make a functional protein.

Gene therapy may be used for treating, or even curing, genetic and acquired diseases like cancer and AIDS by using normal genes to supplement or replace defective genes or to bolster a normal function such as immunity. It can be used to target somatic (i.e., body) or germ (i.e., egg and sperm) cells. In somatic gene therapy, the genome of the recipient is changed, but this change is not passed along to the next generation. In contrast, in germline gene therapy, the egg and sperm cells of the parents are changed for the purpose of passing on the changes to their offspring.

There are basically two ways of implementing a gene therapy treatment:

1. Ex vivo, which means “outside the body” – Cells from the patient’s blood or bone marrow are removed and grown in the laboratory. They are then exposed to a virus carrying the desired gene. The virus enters the cells, and the desired gene becomes part of the DNA of the cells. The cells are allowed to grow in the laboratory before being returned to the patient by injection into a vein.

2. In vivo, which means “inside the body” – No cells are removed from the patient’s body. Instead, vectors are used to deliver the desired gene to cells in the patient’s body.

Currently, the use of gene therapy is limited. Somatic gene therapy is primarily at the experimental stage. Germline therapy is the subject of much discussion but it is not being actively investigated in larger animals and human beings.

As of June 2001, more than 500 clinical gene-therapy trials involving about 3,500 patients had been identified worldwide. Around 78% of these were in the United States, with Europe accounting for 18%. These trials focus on various types of cancer, although other multigenic diseases are being studied as well. Recently, two children born with severe combined immunodeficiency disorder (“SCID”) were reported to have been cured after being given genetically engineered cells.

Gene therapy faces many obstacles before it can become a practical approach for treating disease.[10] At least four of these obstacles are as follows:

1. Gene delivery tools. Genes are inserted into the body using gene carriers called vectors. The most common vectors now are viruses, which have evolved a way of encapsulating and delivering their genes to human cells in a pathogenic manner. Scientists manipulate the genome of the virus by removing the disease-causing genes and inserting the therapeutic genes. However, while viruses are effective, they can introduce problems like toxicity, immune and inflammatory responses, and gene control and targeting issues.

2. Limited knowledge of the functions of genes. Scientists currently know the functions of only a fraction of genes, so gene therapy can address only some of the genes that cause a particular disease. Worse, it is often not known whether a gene has more than one function, which creates uncertainty as to whether replacing such a gene is indeed desirable.

3. Multigene disorders and effect of environment. Most genetic disorders involve more than one gene. Moreover, most diseases involve the interaction of several genes and the environment. For example, many people with cancer not only inherit the disease gene for the disorder, but may have also failed to inherit specific tumor suppressor genes. Diet, exercise, smoking and other environmental factors may have also contributed to their disease.

4. High costs. Since gene therapy is relatively new and at an experimental stage, it is an expensive treatment to undertake. This explains why current studies are focused on illnesses commonly found in developed countries, where more people can afford to pay for treatment. It may take decades before developing countries can take advantage of this technology.

Human Genome Project
DNA Replication image from the Human Genome Project (HGP)

The Human Genome Project is an initiative of the U.S. Department of Energy (“DOE”) that aims to generate a high-quality reference sequence for the entire human genome and identify all the human genes.

The DOE and its predecessor agencies were assigned by the U.S. Congress to develop new energy resources and technologies and to pursue a deeper understanding of potential health and environmental risks posed by their production and use. In 1986, the DOE announced its Human Genome Initiative. Shortly thereafter, the DOE and National Institutes of Health developed a plan for a joint Human Genome Project (“HGP”), which officially began in 1990.

The HGP was originally planned to last 15 years. However, rapid technological advances and worldwide participation accelerated the completion date to 2003, making it a 13-year project. It has already enabled gene hunters to pinpoint genes associated with more than 30 disorders.[11]

Cloning

Cloning involves the removal of the nucleus from one cell and its placement in an unfertilized egg cell whose nucleus has either been deactivated or removed.

There are two types of cloning:

1. Reproductive cloning. After a few divisions, the egg cell is placed into a uterus where it is allowed to develop into a fetus that is genetically identical to the donor of the original nucleus.

2. Therapeutic cloning.[12] The egg is placed into a Petri dish where it develops into embryonic stem cells, which have shown potential for treating several ailments.[13]

In February 1997, cloning became the focus of media attention when Ian Wilmut and his colleagues at the Roslin Institute announced the successful cloning of a sheep, named Dolly, from the mammary glands of an adult female. The cloning of Dolly made it apparent to many that the techniques used to produce her could someday be used to clone human beings.[14] This stirred a lot of controversy because of its ethical implications.

Current Research

In January 2008, Christopher S. Chen reported that cell signaling which is normally biochemically regulated could be triggered instead with magnetic nanoparticles attached to a cell surface. Chen’s research was inspired by the finding of Donald Ingber, Robert Mannix, and Sanjay Kumar that a nanobead can be attached to a monovalent ligand, and that these compounds can bind to mast cells without triggering the clustering response. Normally, when a multivalent ligand attaches to a cell’s receptors, the signaling pathway is activated; these nanobeads, however, initiated cell signaling only when a magnetic field was applied, causing the beads to cluster. Importantly, it was this clustering that triggered the cellular response, not merely the force applied to the cell by the receptor binding. The experiment was carried out several times with time-varying activation cycles, and there is no apparent reason the response time could not be reduced to seconds or even milliseconds. Such short response times would have significant medical applications: current pharmaceuticals take minutes or hours to affect their environment, and once they do, the changes are irreversible, whereas magnetically switched signaling points toward millisecond response times and reversible effects, for instance in treating allergic responses. Further research and testing must still be done in this area, but this is an important step in that direction.[15]

Agriculture

Improved yield from crops

Using the techniques of modern biotechnology, one or two genes may be transferred to a highly developed crop variety to impart a new character that increases its yield (30). However, while increased crop yield is the most obvious application of modern biotechnology in agriculture, it is also the most difficult to achieve. Current genetic engineering techniques work best for traits controlled by a single gene. Many of the genetic characteristics associated with yield (e.g., enhanced growth) are controlled by a large number of genes, each of which has only a minimal effect on overall yield (31). There is, therefore, much scientific work still to be done in this area.

Reduced vulnerability of crops to environmental stresses

Crops containing genes that enable them to withstand biotic and abiotic stresses may be developed. For example, drought and excessively salty soil are two important limiting factors in crop productivity. Biotechnologists are studying plants that can cope with these extreme conditions in the hope of finding the genes that enable them to do so and eventually transferring these genes to more desirable crops. One of the latest developments is the identification of a plant gene, At-DBF2, from thale cress, a tiny weed that is often used for plant research because it is very easy to grow and its genetic code is well mapped out. When this gene was inserted into tomato and tobacco cells, the cells were able to withstand environmental stresses like salt, drought, cold and heat far better than ordinary cells. If these preliminary results prove successful in larger trials, then At-DBF2 genes can help in engineering crops that can better withstand harsh environments (32). Researchers have also created transgenic rice plants that are resistant to rice yellow mottle virus (RYMV). In Africa, this virus destroys the majority of the rice crop and makes the surviving plants more susceptible to fungal infections (33).

Increased nutritional qualities of food crops

Proteins in foods may be modified to increase their nutritional qualities. Proteins in legumes and cereals may be modified to provide the amino acids needed by human beings for a balanced diet (34). A good example is the work of Professors Ingo Potrykus and Peter Beyer on the so-called Golden Rice™ (discussed below).

Improved taste, texture or appearance of food

Modern biotechnology can be used to slow down the process of spoilage so that fruit can ripen longer on the plant and then be transported to the consumer with a still reasonable shelf life. This improves the taste, texture and appearance of the fruit. More importantly, it could expand the market for farmers in developing countries due to the reduction in spoilage.

The first genetically modified food product was a tomato which was transformed to delay its ripening (35). Researchers in Indonesia, Malaysia, Thailand, Philippines and Vietnam are currently working on delayed-ripening papaya in collaboration with the University of Nottingham and Zeneca (36).

Biotechnology in cheese production:[16] enzymes produced by micro-organisms provide an animal-friendly alternative to animal rennet, a cheese coagulant, and a more reliable supply for cheese makers. This also eliminates possible public concerns about animal-derived material. While providing constant quality, enzymes are also less expensive.

About 85 million tons of wheat flour are used every year to bake bread.[17] By adding an enzyme called maltogenic amylase to the flour, bread stays fresh longer. Assuming that 10-15% of bread is thrown away, if bread could stay fresh just another 5-7 days, then 2 million tons of flour per year would be saved. That corresponds to 40% of the bread consumed in a country such as the USA. This means more bread becomes available with no increase in input. In combination with other enzymes, bread can also be made bigger, more appetizing and better in a range of ways.
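As a rough sanity check, the figures quoted above can be combined in a short back-of-envelope sketch (all numbers come from the text; the calculation itself is only illustrative):

```python
# Back-of-envelope check using only the figures quoted in the text above.
flour_total_tons = 85e6      # wheat flour baked into bread each year
waste_share = (0.10, 0.15)   # assumed share of bread thrown away
flour_saved_tons = 2e6       # flour the text estimates longer-fresh bread would save

# Flour currently discarded along with stale bread:
wasted = [flour_total_tons * s for s in waste_share]
print(f"Flour discarded annually: {wasted[0] / 1e6:.2f}-{wasted[1] / 1e6:.2f} million tons")

# The projected saving as a share of all bread flour:
print(f"Estimated saving: {flour_saved_tons / flour_total_tons:.1%} of annual flour use")
```

On these assumptions, the projected 2 million tons corresponds to roughly 2.4% of annual flour use, i.e. only a fraction of the 8.5-12.75 million tons discarded with stale bread.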

Reduced dependence on fertilizers, pesticides and other agrochemicals

Most of the current commercial applications of modern biotechnology in agriculture are on reducing the dependence of farmers on agrochemicals. For example, Bacillus thuringiensis (Bt) is a soil bacterium that produces a protein with insecticidal qualities. Traditionally, a fermentation process has been used to produce an insecticidal spray from these bacteria. In this form, the Bt toxin occurs as an inactive protoxin, which requires digestion by an insect to be effective. There are several Bt toxins and each one is specific to certain target insects. Crop plants have now been engineered to contain and express the genes for Bt toxin, which they produce in its active form. When a susceptible insect ingests the transgenic crop cultivar expressing the Bt protein, it stops feeding and soon thereafter dies as a result of the Bt toxin binding to its gut wall. Bt corn is now commercially available in a number of countries to control corn borer (a lepidopteran insect), which is otherwise controlled by spraying (a more difficult process).

Crops have also been genetically engineered to acquire tolerance to broad-spectrum herbicides. The lack of cost-effective herbicides with broad-spectrum activity and no crop injury was a consistent limitation in crop weed management. Multiple applications of numerous herbicides were routinely used to control a wide range of weed species detrimental to agronomic crops. Weed management tended to rely on preemergence, that is, herbicide applications sprayed in response to expected weed infestations rather than in response to actual weeds present. Mechanical cultivation and hand weeding were often necessary to control weeds not controlled by herbicide applications. The introduction of herbicide-tolerant crops has the potential to reduce the number of herbicide active ingredients used for weed management, reduce the number of herbicide applications made during a season, and increase yield due to improved weed management and less crop injury. Transgenic crops that express tolerance to glyphosate, glufosinate and bromoxynil have been developed. These herbicides can now be sprayed on transgenic crops without damaging the crops while killing nearby weeds (37).

From 1996 to 2001, herbicide tolerance was the most dominant trait introduced to commercially available transgenic crops, followed by insect resistance. In 2001, herbicide tolerance deployed in soybean, corn and cotton accounted for 77% of the 626,000 square kilometres planted to transgenic crops; Bt crops accounted for 15%; and "stacked genes" for herbicide tolerance and insect resistance used in both cotton and corn accounted for 8% (38).
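The 2001 breakdown above can be turned into absolute planted areas with a short sketch (the trait shares and the 626,000 km² total come from the text; the dictionary layout is just an illustrative choice):

```python
# Convert the 2001 trait shares quoted above into absolute planted areas.
total_km2 = 626_000  # transgenic crop area planted in 2001 (from the text)
shares = {
    "herbicide tolerance": 0.77,
    "Bt insect resistance": 0.15,
    "stacked traits": 0.08,
}

areas = {trait: total_km2 * share for trait, share in shares.items()}
for trait, area in areas.items():
    print(f"{trait}: {area:,.0f} km^2")

# The three categories account for the whole planted area:
assert abs(sum(shares.values()) - 1.0) < 1e-9
```

This puts herbicide tolerance at about 482,000 km², Bt crops at about 94,000 km², and stacked traits at about 50,000 km².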

Production of novel substances in crop plants

Biotechnology is being applied for novel uses other than food. For example, oilseed can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals.[citation needed] Potato, tomato, rice, and other plants have been genetically engineered to produce insulin[citation needed] and certain vaccines. If future clinical trials prove successful, the advantages of edible vaccines would be enormous, especially for developing countries. The transgenic plants could be grown locally and cheaply. Homegrown vaccines would also avoid the logistical and economic problems posed by having to transport traditional preparations over long distances and keep them cold in transit. And because they are edible, they would not need syringes, which are not only an additional expense in traditional vaccine preparations but also a source of infection if contaminated.[18] Insulin grown in transgenic plants might not be administered as an edible protein, but it could be produced at significantly lower cost than insulin produced in costly bioreactors.[citation needed]

Criticism

There is, however, another side to the agricultural biotechnology issue. It includes increased herbicide usage and resultant herbicide resistance, "superweeds," residues on and in food crops, genetic contamination of non-GM crops that harms organic and conventional farmers, damage to wildlife from glyphosate, etc.[2][3]

Biological engineering

Main article: Bioengineering

Biotechnological engineering or biological engineering is a branch of engineering that focuses on biotechnologies and biological science. It includes disciplines such as biochemical engineering, biomedical engineering, bioprocess engineering and biosystems engineering. Because the field is so new, the role of a bioengineer is still loosely defined; in general, however, it is an integrated approach combining fundamental biological sciences with traditional engineering principles.

Bioengineers are often employed to scale up bioprocesses from laboratory scale to manufacturing scale. Moreover, as with most engineers, they often deal with management, economic and legal issues. Since patents and regulation (e.g., FDA regulation in the U.S.) are very important issues for biotech enterprises, bioengineers are often required to have knowledge of these areas.

The increasing number of biotech enterprises is likely to create a need for bioengineers in the years to come. Many universities throughout the world are now providing programs in bioengineering and biotechnology (as independent programs or specialty programs within more established engineering fields).

Bioremediation and Biodegradation

Main article: Microbial biodegradation

Biotechnology is being used to engineer and adapt organisms, especially microorganisms, in an effort to find sustainable ways to clean up contaminated environments. The elimination of a wide range of pollutants and wastes from the environment is essential to promote sustainable development of our society with low environmental impact. Biological processes play a major role in the removal of contaminants, and biotechnology is taking advantage of the astonishing catabolic versatility of microorganisms to degrade or convert such compounds. New methodological breakthroughs in sequencing, genomics, proteomics, bioinformatics and imaging are producing vast amounts of information. In the field of environmental microbiology, genome-based global studies open a new era, providing unprecedented in silico views of metabolic and regulatory networks, as well as clues to the evolution of degradation pathways and to the molecular adaptation strategies to changing environmental conditions. Functional genomic and metagenomic approaches are increasing our understanding of the relative importance of different pathways and regulatory networks to carbon flux in particular environments and for particular compounds, and they will certainly accelerate the development of bioremediation technologies and biotransformation processes.[19]

Marine environments are especially vulnerable since oil spills of coastal regions and the open sea are poorly containable and mitigation is difficult. In addition to pollution through human activities, millions of tons of petroleum enter the marine environment every year from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a remarkable recently discovered group of specialists, the so-called hydrocarbonoclastic bacteria (HCB).[20]

The Media's Perception of Biotechnology

There are various TV series, films, and documentaries with biotechnological themes, such as Surface, The X-Files, The Island, I Am Legend, Torchwood, and Horizon. Most of these convey the endless possibilities of how the technology can go wrong, and the consequences of that.

The majority of newspapers also take a pessimistic view of stem cell research, genetic engineering and the like. Some[attribution needed] would describe the media's overarching reaction to biotechnology as simple misunderstanding and fright.[citation needed] While there are legitimate concerns about the overwhelming power this technology may bring, most condemnations of the technology are a result of religious beliefs.[citation needed]

Notable researchers and individuals

* Canada : Frederick Banting, Lap-Chee Tsui, Tak Wah Mak, Lorne Babiuk
* Europe : Paul Nurse, Jacques Monod, Francis Crick
* Finland : Leena Palotie
* Iceland : Kari Stefansson
* India : Kiran Mazumdar-Shaw (Biocon)
* Ireland : Timothy O'Brien, Dermot P Kelleher
* Mexico : Francisco Bolívar Zapata, Luis Herrera-Estrella
* U.S. : David Botstein, Craig Venter, Sydney Brenner, Eric Lander, Leroy Hood, Robert Langer, James J. Collins, Henry I. Miller, Roger Beachy, Herbert Boyer, Michael West, Thomas Okarma, James D. Watson

See also

* Bioeconomy
* Biomimetics
* Biotechnology industrial park
* Green Revolution
* List of biotechnology articles
* List of biotechnology companies
* List of emerging technologies
* Pharmaceutical company
* EuropaBio

References

1. ^ "The Convention on Biological Diversity (Article 2. Use of Terms)." United Nations. 1992. Retrieved on February 6, 2008.
2. ^ Bunders, J.; Haverkort, W.; Hiemstra, W. "Biotechnology: Building on Farmer's Knowledge." 1996, Macmillan Education, Ltd. ISBN 0333670825
3. ^ Springham, D.; Springham, G.; Moses, V.; Cape, R.E. "Biotechnology: The Science and the Business." Published 1999, Taylor & Francis. p. 1. ISBN 9057024071
4. ^ "Diamond v. Chakrabarty, 447 U.S. 303 (1980). No. 79-139." United States Supreme Court. June 16, 1980. Retrieved on May 4, 2007.
5. ^ Gerstein, M. "Bioinformatics Introduction." Yale University. Retrieved on May 8, 2007.
6. ^ a b U.S. Department of Energy Human Genome Program, supra note 6.
7. ^ W. Bains, Genetic Engineering For Almost Everybody: What Does It Do? What Will It Do? (London: Penguin Books, 1987), 99.
8. ^ a b c U.S. Department of State International Information Programs, “Frequently Asked Questions About Biotechnology”, USIS Online; available from http://usinfo.state.gov/ei/economic_issues/biotechnology/biotech_faq.html, accessed 13 Sept 2007. Cf. C. Feldbaum, “Some History Should Be Repeated”, 295 Science, 8 February 2002, 975.
9. ^ The National Action Plan on Breast Cancer and U.S. National Institutes of Health-Department of Energy Working Group on the Ethical, Legal and Social Implications (ELSI) have issued several recommendations to prevent workplace and insurance discrimination. The highlights of these recommendations, which may be taken into account in developing legislation to prevent genetic discrimination, may be found at http://www.ornl.gov/hgmis/elsi/legislat.html.
10. ^ Ibid
11. ^ U.S. Department of Energy Human Genome Program, supra note 6
12. ^ A number of scientists have called for the use the term “nuclear transplantation,” instead of “therapeutic cloning,” to help reduce public confusion. The term “cloning” has become synonymous with “somatic cell nuclear transfer,” a procedure that can be used for a variety of purposes, only one of which involves an intention to create a clone of an organism. They believe that the term “cloning” is best associated with the ultimate outcome or objective of the research and not the mechanism or technique used to achieve that objective. They argue that the goal of creating a nearly identical genetic copy of a human being is consistent with the term “human reproductive cloning,” but the goal of creating stem cells for regenerative medicine is not consistent with the term “therapeutic cloning.” The objective of the latter is to make tissue that is genetically compatible with that of the recipient, not to create a copy of the potential tissue recipient. Hence, “therapeutic cloning” is conceptually inaccurate. B. Vogelstein, B. Alberts, and K. Shine, “Please Don’t Call It Cloning!”, Science (15 February 2002), 1237
13. ^ D. Cameron, "Stop the Cloning", Technology Review, 23 May 2002. Also available from http://www.techreview.com. [hereafter "Cameron"]
14. ^ M.C. Nussbaum and C.R. Sunstein, Clones And Clones: Facts And Fantasies About Human Cloning (New York: W.W. Norton & Co., 1998), 11. However, there is wide disagreement within scientific circles whether human cloning can be successfully carried out. For instance, Dr. Rudolf Jaenisch of Whitehead Institute for Biomedical Research believes that reproductive cloning shortcuts basic biological processes, thus making normal offspring impossible to produce. In normal fertilization, the egg and sperm go through a long process of maturation. Cloning shortcuts this process by trying to reprogram the nucleus of one whole genome in minutes or hours. This results in gross physical malformations to subtle neurological disturbances. Cameron, supra note 30
15. ^ Chen, Christopher. Nature Nanotech. 13-14 (2008)
16. ^ EuropaBio - An animal friendly alternative for cheese makers
17. ^ EuropaBio - Biologically better bread
18. ^ Pascual DW (2007). "Vaccines are for dinner". Proc Natl Acad Sci U S A 104 (26): 10757–8. doi:10.1073/pnas.0704516104. PMID 17581867. 
19. ^ Diaz E (editor). (2008). Microbial Biodegradation: Genomics and Molecular Biology, 1st ed., Caister Academic Press. ISBN 978-1-904455-17-2.
20. ^ Martins VAP et al (2008). "Genomic Insights into Oil Biodegradation in Marine Systems", Microbial Biodegradation: Genomics and Molecular Biology. Caister Academic Press. ISBN 978-1-904455-17-2.

Further reading

* Friedman, Y. Building Biotechnology: Starting, Managing, and Understanding Biotechnology Companies. ISBN 978-0973467635.
* Oliver, Richard W. The Coming Biotech Age. ISBN 0-07-135020-9.
* Zaid, A; H.G. Hughes, E. Porceddu, F. Nicholas (2001). Glossary of Biotechnology for Food and Agriculture - A Revised and Augmented Edition of the Glossary of Biotechnology and Genetic Engineering. Available in English, French, Spanish and Arabic. Rome: FAO. ISBN 92-5-104683-2.

External links

* A report on Agricultural Biotechnology focusing on the impacts of "Green" Biotechnology with a special emphasis on economic aspects


Retrieved from "http://en.wikipedia.org/wiki/Biotechnology"
Nanotechnology:

Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter on the atomic and molecular scale, normally 1 to 100 nanometers, and the fabrication of devices with critical dimensions that lie within that size range.
Contents

* 1 Overview
* 2 Origins
* 3 Fundamental concepts
o 3.1 Larger to smaller: a materials perspective
o 3.2 Simple to complex: a molecular perspective
o 3.3 Molecular nanotechnology: a long-term view
* 4 Current research
o 4.1 Nanomaterials
o 4.2 Bottom-up approaches
o 4.3 Top-down approaches
o 4.4 Functional approaches
o 4.5 Speculative
* 5 Tools and techniques
* 6 Applications
o 6.1 Cancer
o 6.2 Other
* 7 Implications
* 8 References
* 9 See also
* 10 Further reading
* 11 External links

Overview

Nanotechnology is a highly multidisciplinary field, drawing on applied physics, materials science, interface and colloid science, device physics, supramolecular chemistry (the area of chemistry that focuses on noncovalent bonding interactions between molecules), self-replicating machines and robotics, chemical engineering, mechanical engineering, biological engineering, and electrical engineering. Much speculation exists as to what may result from these lines of research. Nanotechnology can be seen as an extension of existing sciences into the nanoscale, or as a recasting of existing sciences using a newer, more modern term.

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control. The impetus for nanotechnology comes from a renewed interest in Interface and Colloid Science, coupled with a new generation of analytical tools such as the atomic force microscope (AFM), and the scanning tunneling microscope (STM). Combined with refined processes such as electron beam lithography and molecular beam epitaxy, these instruments allow the deliberate manipulation of nanostructures, and led to the observation of novel phenomena.

Examples of nanotechnology in modern use are the manufacture of polymers based on molecular structure, and the design of computer chip layouts based on surface science. Despite the great promise of numerous nanotechnologies such as quantum dots and nanotubes, real commercial applications have mainly used the advantages of colloidal nanoparticles in bulk form, such as suntan lotion, cosmetics, protective coatings, drug delivery,[1] and stain resistant clothing.

Origins
Buckminsterfullerene C60, also known as the buckyball, is the simplest of the carbon structures known as fullerenes. Members of the fullerene family are a major subject of research falling under the nanotechnology umbrella.

Main article: History of nanotechnology

The first use of the concepts of 'nano-technology' (though predating use of that name) was in "There's Plenty of Room at the Bottom," a talk given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. This basic idea appears plausible, and exponential assembly enhances it with parallelism to produce a useful quantity of end products.

The term "nanotechnology" was defined by Tokyo Science University Professor Norio Taniguchi in a 1974 paper (N. Taniguchi, "On the Basic Concept of 'Nano-Technology'," Proc. Intl. Conf. Prod. London, Part II, British Society of Precision Engineering, 1974) as follows: "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nanoscale phenomena and devices through speeches and the books Engines of Creation: The Coming Era of Nanotechnology (1986) and Nanosystems: Molecular Machinery, Manufacturing, and Computation,[2] and so the term acquired its current sense.

Nanotechnology and nanoscience got under way in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and of carbon nanotubes a few years later. In a parallel development, the synthesis and properties of semiconductor nanocrystals were studied, leading to a rapidly increasing number of metal oxide nanoparticles and quantum dots. The atomic force microscope was invented six years after the STM.

Fundamental concepts

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. For comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12-0.15 nm, and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. To put that scale into context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the Earth.[3] Or another way of putting it: a nanometer is the amount a man's beard grows in the time it takes him to raise the razor to his face.[3]
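The marble-to-Earth comparison is easy to verify numerically. The following sketch assumes a marble of roughly 1.3 cm diameter and the Earth's mean diameter of about 12,742 km (both assumed values, not from the text):

```python
# Check the analogy: nanometer / meter  vs  marble / Earth.
nanometer = 1e-9            # meters
marble_diameter = 0.013     # meters (~1.3 cm, an assumed typical marble)
earth_diameter = 1.2742e7   # meters (Earth's mean diameter, ~12,742 km)

ratio_nm = nanometer / 1.0
ratio_marble = marble_diameter / earth_diameter
print(f"nm : m         = {ratio_nm:.1e}")
print(f"marble : Earth = {ratio_marble:.1e}")
# Both ratios come out on the order of 1e-9, so the analogy holds.
```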

Larger to smaller: a materials perspective
Image of reconstruction on a clean Au(100) surface, as visualized using scanning tunneling microscopy. The individual atoms composing the surface are visible.

Main article: Nanomaterials

A number of physical phenomena become noticeably pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions; however, it becomes dominant when the nanometer size range is reached. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, which alters the mechanical, thermal and catalytic properties of materials. Novel mechanical properties of nanosystems are of interest to nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
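The surface-area-to-volume effect can be made concrete with a small sketch: for a sphere the ratio simplifies to 3/r, so shrinking a particle from centimeter to nanometer scale multiplies the relative surface area by ten million (the sphere model is an illustrative simplification):

```python
import math

def surface_to_volume(radius_m):
    """Surface-area-to-volume ratio of a sphere; simplifies to 3 / r."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

for radius in (1e-2, 1e-6, 1e-9):  # 1 cm, 1 micron, 1 nm
    print(f"r = {radius:.0e} m -> SA/V = {surface_to_volume(radius):.1e} per meter")
```

This 1/r scaling is why, for instance, catalytic activity per unit mass rises so sharply at the nanoscale.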

Materials reduced to the nanoscale can suddenly show very different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances become transparent (copper); inert materials become catalysts (platinum); stable materials turn combustible (aluminum); solids turn into liquids at room temperature (gold); insulators become conductors (silicon). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these unique quantum and surface phenomena that matter exhibits at the nanoscale.

Simple to complex: a molecular perspective

Main article: Molecular self-assembly

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules of almost any structure. These methods are used today to produce a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.

These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry so that components automatically arrange themselves into some useful conformation, a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific conformation or arrangement is favored due to non-covalent intermolecular forces. The Watson-Crick basepairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.

Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and far more cheaply than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson-Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer novel constructs in addition to natural ones.

Molecular nanotechnology: a long-term view

Main article: Molecular nanotechnology

Molecular nanotechnology, sometimes called molecular manufacturing, is a term given to the concept of engineered nanosystems (nanoscale machines) operating on the molecular scale. It is especially associated with the concept of a molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.

When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi), it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: the countless examples found in biology show that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers[4] have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification (PNAS-1981). The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

Drexler's analysis, however, is largely qualitative and does not address pressing issues such as the "fat fingers" and "sticky fingers" problems. In general it is very difficult to assemble devices on the atomic scale, as all one has to position atoms with are other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[5] is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Yet another view, put forward by the late Richard Smalley, is that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[6] Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley National Laboratory and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.

Current research
Space-filling model of the nanocar on a surface, using fullerenes as wheels.
Graphical representation of a rotaxane, useful as a molecular switch.
This device transfers energy from nano-thin layers of quantum wells to nanocrystals above them, causing the nanocrystals to emit visible light.[7]

Nanomaterials

This includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.[8]

* Interface and Colloid Science has given rise to many materials which may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods.
* Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor.
* Progress has been made in using these materials for medical applications; see Nanomedicine.

Bottom-up approaches

These seek to arrange smaller components into more complex assemblies.

* DNA nanotechnology utilizes the specificity of Watson-Crick basepairing to construct well-defined structures out of DNA and other nucleic acids.
* Approaches from the field of "classical" chemical synthesis also aim at designing molecules with well-defined shape (e.g. bis-peptides[9]).
* More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
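The base-pairing specificity that DNA nanotechnology exploits is simple enough to sketch in a few lines of Python; the helper name below is our own illustration, not from any particular toolkit.

```python
# Watson-Crick pairing: A binds T and G binds C, and strands bind antiparallel,
# so a strand hybridizes with the reverse complement of its sequence.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the sequence (5'->3') that base-pairs with `strand` (also 5'->3')."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(reverse_complement("ACGGT"))  # ACCGT
```

Designed complementary "sticky ends" like this are what let DNA tiles recognize each other and self-assemble into predetermined structures.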

Top-down approaches

These seek to create smaller devices by using larger ones to direct their assembly.

* Many technologies descended from conventional solid-state silicon methods for fabricating microprocessors are now capable of creating features smaller than 100 nm, falling under the definition of nanotechnology. Giant magnetoresistance-based hard drives already on the market fit this description,[10] as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert Fert received the 2007 Nobel Prize in Physics for their discovery of giant magnetoresistance and contributions to the field of spintronics.[11]

* Solid-state techniques can also be used to create devices known as nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS.
* Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography. This fits into the larger subfield of nanolithography.
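The giant magnetoresistance effect mentioned above is usually quoted as a fractional resistance change between the antiparallel and parallel magnetization states of the multilayer; a minimal sketch, with purely illustrative resistance values:

```python
def gmr_ratio(r_parallel: float, r_antiparallel: float) -> float:
    """GMR ratio (R_AP - R_P) / R_P, the usual figure of merit for a read head."""
    return (r_antiparallel - r_parallel) / r_parallel

# Illustrative values in ohms; practical multilayer read heads show
# resistance changes on the order of tens of percent.
print(f"{gmr_ratio(100.0, 115.0):.0%}")  # 15%
```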

Functional approaches

These seek to develop components of a desired functionality without regard to how they might be assembled.

* Molecular electronics seeks to develop molecules with useful electronic properties. These could then be used as single-molecule components in a nanoelectronic device.[12] For an example see rotaxane.
* Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-called nanocar.

Speculative

These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than the details of how such inventions could actually be created.

* Molecular nanotechnology is a proposed approach which involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields and is beyond current capabilities.
* Nanorobotics centers on self-sufficient machines of some functionality operating at the nanoscale. There are hopes for applying nanorobots in medicine,[13][14][15] but this may not be easy because of several drawbacks of such devices.[16] Nevertheless, progress on innovative materials and methodologies has been demonstrated, with some patents granted for new nanomanufacturing devices aimed at future commercial applications, which also progressively advances the development of nanorobots using embedded nanobioelectronics concepts.[17][18]
* Programmable matter based on artificial atoms seeks to design materials whose properties can be easily and reversibly externally controlled.
* Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are only used rarely and informally.

Tools and techniques
Typical AFM setup. A microfabricated cantilever with a sharp tip is deflected by features on a sample surface, much like in a phonograph but on a much smaller scale. A laser beam reflects off the backside of the cantilever into a set of photodetectors, allowing the deflection to be measured and assembled into an image of the surface.
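The caption's signal chain, photodiode voltage to cantilever deflection to tip force via Hooke's law, can be sketched directly; the sensitivity and spring constant below are assumed, typical-order values for a soft contact-mode cantilever, not figures from the text.

```python
# Assumed, typical-order calibration values (not from the text).
SENSITIVITY_NM_PER_V = 50.0    # tip deflection per volt of photodiode signal
SPRING_CONSTANT_N_PER_M = 0.1  # cantilever stiffness k

def tip_force_nN(photodiode_volts: float) -> float:
    """Hooke's law F = k*x applied to the measured cantilever deflection."""
    deflection_nm = photodiode_volts * SENSITIVITY_NM_PER_V
    return SPRING_CONSTANT_N_PER_M * deflection_nm  # in nN, since k[N/m] * x[nm] = F[nN]

print(tip_force_nN(0.2))  # 0.2 V -> 10 nm deflection -> 1.0 nN
```

Scanning that conversion across the surface, point by point, is what assembles the deflection signal into an image.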

The first observations and size measurements of nanoparticles were made during the first decade of the 20th century. They are mostly associated with the name of Zsigmondy, who made a detailed study of gold sols and other nanomaterials with sizes down to 10 nm and less, publishing a book on the subject in 1914.[19] He used an ultramicroscope, which employs a dark-field method to see particles with sizes much smaller than the wavelength of light.

There are traditional techniques, developed during the 20th century in Interface and Colloid Science, for characterizing nanomaterials. These are widely used for the first-generation passive nanomaterials specified in the next section.

These methods include several different techniques for characterizing particle size distribution. This characterization is imperative because many materials that are expected to be nano-sized are actually aggregated in solution. Some of the methods are based on light scattering; others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nano-dispersions and microemulsions.[20]
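Among the light-scattering methods, dynamic light scattering recovers particle size from Brownian diffusion via the Stokes-Einstein relation; a minimal sketch, assuming water at room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(diffusion_m2_per_s: float,
                           temperature_K: float = 298.0,
                           viscosity_Pa_s: float = 8.9e-4) -> float:
    """Stokes-Einstein relation: r = k_B * T / (6 * pi * eta * D)."""
    r_m = K_B * temperature_K / (6 * math.pi * viscosity_Pa_s * diffusion_m2_per_s)
    return r_m * 1e9

# Slower diffusion means a larger particle; both results here are in nanometres.
print(hydrodynamic_radius_nm(4.3e-11), hydrodynamic_radius_nm(4.3e-12))
```

Note that this returns a hydrodynamic radius of the diffusing unit, which is exactly why aggregation shows up clearly: an aggregate diffuses more slowly and reports as a larger particle.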

There is also a group of traditional techniques for characterizing the surface charge or zeta potential of nanoparticles in solution. This information is required for proper stabilization of the system, preventing its aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering and electroacoustics. The last, for instance the colloid vibration current method, is suitable for characterizing concentrated systems.
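What microelectrophoresis actually measures is an electrophoretic mobility; the Smoluchowski approximation (valid for thin double layers) converts that to a zeta potential. A sketch with assumed room-temperature water properties:

```python
# Assumed properties of water at ~25 C.
VISCOSITY = 8.9e-4               # Pa*s
PERMITTIVITY = 78.5 * 8.854e-12  # relative permittivity of water times eps_0, F/m

def zeta_potential_mV(mobility_m2_per_Vs: float) -> float:
    """Smoluchowski approximation: zeta = mu * eta / epsilon."""
    return mobility_m2_per_Vs * VISCOSITY / PERMITTIVITY * 1e3

# |zeta| above roughly 30 mV is a common rule of thumb for a stable dispersion.
print(zeta_potential_mV(2.5e-8))
```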

The next group of nanotechnological techniques includes those used for the fabrication of nanowires; those used in semiconductor fabrication, such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition; and further includes molecular self-assembly techniques, such as those employing di-block copolymers. However, all of these techniques preceded the nanotech era; they are extensions of earlier scientific advances rather than techniques devised with the sole purpose of creating nanotechnology, or results of nanotechnology research.

There are several important modern developments. The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two early versions of the scanning probes that launched nanotechnology. There are other types of scanning probe microscopy, all flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, that made it possible to see structures at the nanoscale. The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). The feature-oriented scanning-positioning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode. However, this is still a slow process because of the low scanning velocity of the microscope. Various techniques of nanolithography, such as dip pen nanolithography, electron beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique in which a bulk material is reduced in size to a nanoscale pattern.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are currently made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning-positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present, it is expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.
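Because MBE deposits material one atomic monolayer at a time, typically at around a monolayer per second, layer thicknesses translate directly into growth times; a sketch using an assumed GaAs monolayer thickness:

```python
MONOLAYER_NM = 0.28  # approximate GaAs monolayer thickness (assumed value)

def growth_time_s(thickness_nm: float, rate_ml_per_s: float = 1.0) -> float:
    """Seconds needed to grow a layer of the given thickness at the given rate."""
    return thickness_nm / MONOLAYER_NM / rate_ml_per_s

# A 10 nm quantum well at 1 monolayer/s takes about 36 seconds to deposit.
print(round(growth_time_s(10.0)))
```

This layer-by-layer bookkeeping is why MBE can define heterostructure interfaces to within a single atomic plane.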

Newer techniques such as Dual Polarisation Interferometry are enabling scientists to measure quantitatively the molecular interactions that take place at the nano-scale.

Applications

Main article: List of nanotechnology applications

Cancer

The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging. Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image and at a lower cost than today's organic dyes. Another nanoproperty, high surface area to volume ratio, allows many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells. Additionally, the small size of nanoparticles (10 to 100 nanometers), allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system). A very exciting research question is how to make these imaging nanoparticles do more things for cancer. For instance, is it possible to manufacture multifunctional nanoparticles that would detect, image, and then proceed to treat a tumor? This question is currently under vigorous investigation; the answer to which could shape the future of cancer treatment.[21]
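The "size-tunable light emission" of quantum dots follows from quantum confinement, and the Brus model gives its flavor: confinement energy grows as the dot shrinks, shifting emission toward the blue. A sketch with assumed CdSe parameters (bulk gap 1.74 eV, effective masses 0.13 and 0.45 electron masses, dielectric constant ~10); a rough model, not a substitute for measured spectra:

```python
import math

# Physical constants: hbar (J*s), electron mass (kg), elementary charge (C), eps_0 (F/m).
HBAR, M0, E_CH, EPS0 = 1.0546e-34, 9.109e-31, 1.602e-19, 8.854e-12

def emission_nm(radius_nm: float) -> float:
    """Brus-model emission wavelength for a CdSe dot of the given radius (assumed parameters)."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2 * r**2)) * (1 / (0.13 * M0) + 1 / (0.45 * M0))
    coulomb = 1.8 * E_CH**2 / (4 * math.pi * EPS0 * 10.0 * r)
    energy_eV = 1.74 + (confinement - coulomb) / E_CH
    return 1240.0 / energy_eV  # photon wavelength in nm

print(round(emission_nm(2.5)), round(emission_nm(5.0)))  # smaller dot -> bluer emission
```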

Other

Although there has been much hype about the potential applications of nanotechnology, most current commercialized applications are limited to the use of "first generation" passive nanomaterials. These include titanium dioxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances; zinc oxide nanoparticles in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide nanoparticles as a fuel catalyst. The Woodrow Wilson Center for International Scholars' Project on Emerging Nanotechnologies hosts an online inventory of consumer products which now contain nanomaterials.[22]

However, further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies currently branded with the term 'nano' are sometimes little related to, and fall far short of, the most ambitious and transformative goals of molecular manufacturing proposals, the term still connotes such ideas. Thus there may be a danger that a "nano bubble" will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work.

The National Science Foundation (a major source of funding for nanotechnology in the United States) funded researcher David Berube to study the field of nanotechnology. His findings are published in the monograph “Nano-Hype: The Truth Behind the Nanotechnology Buzz”. This published study (with a foreword by Mihail Roco, Senior Advisor for Nanotechnology at the National Science Foundation) concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes.”

Another large and beneficial outcome of nanotechnology is the production of potable water by means of nanofiltration. Since much of the developing world lacks access to reliable water sources, nanotechnology may alleviate these problems upon further testing, such as that already performed in countries like South Africa. It is important that solute levels in treated water remain sufficient to provide necessary nutrients to people. Further testing would also be pertinent to detect any signs of nanotoxicity and any negative effects on biological organisms.[23]

In 1999, the ultimate CMOS transistor developed at the Laboratory for Electronics and Information Technology in Grenoble, France, tested the limits of the principles of the MOSFET transistor with a diameter of 18 nm (approximately 70 atoms placed side by side). This was almost 10 times smaller than the smallest industrial transistor of the time (130 nm in 2003, 90 nm in 2004 and 65 nm in 2005), and enabled the theoretical integration of seven billion junctions on a €1 coin. The 1999 CMOS transistor was not a simple research experiment to study how CMOS technology functions, but rather a demonstration of how this technology functions now that we are getting ever closer to working on a molecular scale. Today it would be impossible to master the coordinated assembly of a large number of these transistors on a circuit, and it would also be impossible to create this on an industrial level.[24]
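The quoted figure of roughly 70 atoms across 18 nm is easy to sanity-check: it implies an atomic spacing of the same order as silicon's interatomic distance (a Si-Si bond is about 0.235 nm).

```python
# Implied per-atom spacing of the 18 nm "70 atoms side by side" transistor.
gate_length_nm, atoms_across = 18.0, 70
spacing_pm = gate_length_nm / atoms_across * 1000  # picometres per atom
print(round(spacing_pm))  # ~257 pm, the right order for a silicon interatomic spacing
```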

Implications

Main article: Implications of nanotechnology

Due to the far-ranging claims that have been made about potential applications of nanotechnology, a number of concerns have been raised about what effects these will have on our society if realized, and what action if any is appropriate to mitigate these risks.

One area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. Groups such as the Center for Responsible Nanotechnology have advocated that nanotechnology should be specially regulated by governments for these reasons. Others counter that overregulation would stifle scientific research and the development of innovations which could greatly benefit mankind.

Other experts, including David Rejeski, director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies, have testified[25] that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. More recently, local municipalities have passed (Berkeley, CA) or are considering (Cambridge, MA) ordinances requiring nanomaterial manufacturers to disclose the known risks of their products.

Longer-term concerns center on the implications that new technologies will have for society at large, and whether these could possibly lead to either a post scarcity economy, or alternatively exacerbate the wealth gap between developed and developing nations.

References

1. ^ Abdelwahed W, Degobert G, Stainmesse S, Fessi H, (2006). "Freeze-drying of nanoparticles: Formulation, process and storage considerations". Advanced Drug Delivery Reviews. 58 (15): 1688-1713. 
2. ^ Drexler, K. Eric (1992). Nanosystems: Molecular Machinery, Manufacturing, and Computation. ISBN 0-471-57518-6
3. ^ a b Kahn, Jennifer (2006). "Nanotechnology". National Geographic 2006 (June): 98-119. 
4. ^ Nanotechnology: Developing Molecular Manufacturing
5. ^ California NanoSystems Institute
6. ^ C&En: Cover Story - Nanotechnology
7. ^ Wireless nanocrystals efficiently radiate visible light
8. ^ Narayan RJ, Kumta PN, Sfeir C, Lee D-H, Olton D, Choi D. (2004). "Nanostructured Ceramics in Medical Devices: Applications and Prospects.". JOM 56 (10): 38-43. 
9. ^ Levins CG, Schafmeister CE. The synthesis of curved and linear structures from a minimal set of monomers. Journal of Organic Chemistry, 70, p. 9002, 2005. doi:10.1002/chin.200605222
10. ^ Applications/Products. National Nanotechnology Initiative. Retrieved on 2007-10-19.
11. ^ The Nobel Prize in Physics 2007. Nobelprize.org. Retrieved on 2007-10-19.
12. ^ Das S, Gates AJ, Abdu HA, Rose GS, Picconatto CA, Ellenbogen JC. (2007). "Designs for Ultra-Tiny, Special-Purpose Nanoelectronic Circuits.". IEEE Transactions on Circuits and Systems I 54 (11): 2528-2540. 
13. ^ Ghalanbor Z, Marashi SA, Ranjbar B (2005). "Nanotechnology helps medicine: nanoscale swimmers and their future applications". Med Hypotheses 65 (1): 198-199. PMID 15893147. 
14. ^ Kubik T, Bogunia-Kubik K, Sugisaka M. (2005). "Nanotechnology on duty in medical applications". Curr Pharm Biotechnol. 6 (1): 17-33. PMID 15727553. 
15. ^ Leary SP, Liu CY, Apuzzo MLJ. (2006). "Toward the Emergence of Nanoneurosurgery: Part III-Nanomedicine: Targeted Nanotherapy, Nanosurgery, and Progress Toward the Realization of Nanoneurosurgery.". Neurosurgery 58 (6): 1009-1026. 
16. ^ Shetty RC (2005). "Potential pitfalls of nanotechnology in its applications to medicine: immune incompatibility of nanodevices". Med Hypotheses 65 (5): 998-9. PMID 16023299. 
17. ^ Cavalcanti A, Shirinzadeh B, Freitas RA Jr., Kretly LC. (2007). "Medical Nanorobot Architecture Based on Nanobioelectronics". Recent Patents on Nanotechnology. 1 (1): 1-10. 
18. ^ Boukallel M, Gauthier M, Dauge M, Piat E, Abadie J. (2007). "Smart microrobots for mechanical cell characterization and cell convoying.". IEEE Trans. Biomed. Eng. 54 (8): 1536-40. PMID 17694877. 
19. ^ Zsigmondy, R. "Colloids and the Ultramicroscope", J.Wiley and Sons, NY, (1914)
20. ^ Dukhin, A.S. and Goetz, P.J. "Ultrasound for characterizing colloids", Elsevier, 2002
21. ^ Nie, Shuming, Yun Xing, Gloria J. Kim, and Jonathan W. Simmons. "Nanotechnology Applications in Cancer." Annual Review of Biomedical Engineering 9
22. ^ A Nanotechnology Consumer Products Inventory
23. ^ Hillie, Thembela and Mbhuti Hlophe. "Nanotechnology and the challenge of clean water." Nature.com/naturenanotechonolgy. November 2007: Volume 2.
24. ^ Waldner, Jean-Baptiste (2007). Nanocomputers and Swarm Intelligence. ISTE, p26. ISBN 1847040020.
25. ^ Testimony of David Rejeski for U.S. Senate Committee on Commerce, Science and Transportation Project on Emerging Nanotechnologies. Retrieved on 2008-3-7.

See also

* American National Standards Institute Nanotechnology Panel (ANSI-NSP)
* Energy Applications of Nanotechnology
* IEST
* List of emerging technologies
* List of nanotechnology organizations
* List of nanotechnology topics
* Molecular modelling
* Nanoengineering
* Nanobiotechnology
* Nanofluidics
* Nanoethics
* Nanoscale iron particles
* Nanotechnology education
* Nanotechnology in fiction
* Plug-in hybrid
* Supramolecular chemistry
* Top-down and bottom-up

Further reading

* Andrew D. Maynard and David Y.H. Pui, Eds. (2007), Nanoparticles and Occupational Health, Journal of Nanoparticle Research, 9:1, February 2007. ISBN 978-1-4020-5858-5
* J. Clarence Davies, EPA and Nanotechnology: Oversight for the 21st Century, Project on Emerging Nanotechnologies, PEN 9, May 2007.
* William Sims Bainbridge: Nanoconvergence: The Unity of Nanoscience, Biotechnology, Information Technology and Cognitive Science, June 27 2007, Prentice Hall, ISBN 0-13-244643-X
* Lynn E. Foster: Nanotechnology: Science, Innovation, and Opportunity, December 21 2005, Prentice Hall, ISBN 0-13-192756-6
* IEST Focuses on Facilities in Nanotechnology Initiative by David Ensor, from Journal of the IEST, October 2006.
* Advancements in Nanotechnology Open Opportunities for Environmental Sciences by Clifford (Bud) Frith, from Journal of the IEST, Volume 48, Number 1 / 2005.
* Advancements in Nanotechnology Open Opportunities for Environmental Sciences by John Weaver, from Journal of the IEST, Volume 48, Number 1 / 2005.
* Nano's Big Future by Jennifer Kahn, from National Geographic, June 2006. [1]
* Impact of Nanotechnology on Biomedical Sciences: Review of Current Concepts on Convergence of Nanotechnology With Biology by Herbert Ernest and Rahul Shetty, from AZojono, May 2005.
* Geoffrey Hunt and Michael Mehta (2006), Nanotechnology: Risk, Ethics and Law. London: Earthscan Books.
* Hari Singh Nalwa (2004), Encyclopedia of Nanoscience and Nanotechnology (10-Volume Set), American Scientific Publishers. ISBN 1-58883-001-2
* Michael Rieth and Wolfram Schommers (2006), Handbook of Theoretical and Computational Nanotechnology (10-Volume Set), American Scientific Publishers. ISBN 1-58883-042-X
* Yuliang Zhao and Hari Singh Nalwa (2007), Nanotoxicology, American Scientific Publishers. ISBN 1-58883-088-8
* Hari Singh Nalwa and Thomas Webster (2007), Cancer Nanotechnology, American Scientific Publishers. ISBN 1-58883-071-3

* David M. Berube 2006. Nano-hype: The Truth Behind the Nanotechnology Buzz. Prometheus Books. ISBN 1-59102-351-3
* Jones, Richard A. L. (2004). Soft Machines. Oxford University Press, Oxford, United Kingdom. ISBN 0198528558.
* Akhlesh Lakhtakia (ed) (2004). The Handbook of Nanotechnology. Nanometer Structures: Theory, Modeling, and Simulation. SPIE Press, Bellingham, WA, USA. ISBN 0-8194-5186-X.
* Fei Wang & Akhlesh Lakhtakia (eds) (2006). Selected Papers on Nanotechnology -- Theory & Modeling (Milestone Volume 182). SPIE Press, Bellingham, WA, USA. ISBN 0-8194-6354-X.

* G. Ali Mansoori, Principles of Nanotechnology, World Scientific Pub. Co., 2005. http://www.worldscibooks.com/nanosci/5749.html
* Roger Smith, Nanotechnology: A Brief Technology Analysis, CTOnet.org, 2004. http://www.ctonet.org/documents/Nanotech_analysis.pdf
* Arius Tolstoshev, Nanotechnology: Assessing the Environmental Risks for Australia, Earth Policy Centre, September 2006. http://www.earthpolicy.org.au/nanotech.pdf
* Friends of the Earth, "Nanotechnology, sunscreens and cosmetics: Small ingredients, big risks", 2006. http://nano.foe.org.au/node/125
* Fritz Allhoff, Patrick Lin, James Moor, and John Weckert (editors) (2007), Nanoethics: The Ethical and Societal Implications of Nanotechnology, John Wiley & Sons, Hoboken, NJ, USA, ISBN 978-0-470-08417-5. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470084170.html http://www.nanoethics.org/wiley.html
* Kurzweil, Ray. (2001, March). "Promise and Peril - The Deeply Intertwined Poles of 21st Century Technology," Communications of the ACM, Vol. 44, Issue 3, pp. 88-91.

External links

For external links to companies and institutions involved in nanotechnology, please see List of nanotechnology organizations.


* Nanotechnology at the Open Directory Project
* The Ethics and Politics of Nanotechnology - UNESCO Brochure describing the science of nanotechnology and presenting some of the ethical, legal and political issues that face the international community in the near future.
* AACR Cancer Concepts: Nanotechnology - Article from the American Association for Cancer Research
* Capitalizing on Nanotechnolgy's Enormous Promise - Article from CheResources.com
* Research articles in nanotechnology
* Learn about nanotechnology - Article from Null Hypothesis: The Journal of Unlikely Science
* Opportunities and Risks of Nanotechnology - Article from ETH Zurich
* Nanotechnology Research and Technical Data - Article from American Elements Corp.
* UnderstandingNano.com - Nanotechnology portal site
* NanoDetails.com - Nanotechnology portal site
* VIDEO: Using Nanotechnology to Improve Health in Developing Countries February 27, 2007 at the Woodrow Wilson Center
* VIDEO: Nanotechnology Discussion by the BBC and the Vega Science Trust.
* NanoHive@Home - Distributed Computing Project


Robotics:

Robotics is the science and technology of robots, their design, manufacture, and application.[1] Robotics requires a working knowledge of electronics, mechanics and software, and is usually accompanied by a large working knowledge of many subjects.[2] A person working in the field is a roboticist.

Although the appearance and capabilities of robots vary vastly, all robots share the features of a mechanical, movable structure under some form of autonomous control. The structure of a robot is usually mostly mechanical and can be called a kinematic chain (its functionality being akin to the skeleton of the human body). The chain is formed of links (its bones), actuators (its muscles) and joints which can allow one or more degrees of freedom. Most contemporary robots use open serial chains in which each link connects the one before to the one after it. These robots are called serial robots and often resemble the human arm. Some robots, such as the Stewart platform, use closed parallel kinematic chains. Other structures, such as those that mimic the mechanical structure of humans, various animals and insects, are comparatively rare. However, the development and use of such structures in robots is an active area of research (e.g. biomechanics). Robots used as manipulators have an end effector mounted on the last link. This end effector can be anything from a welding device to a mechanical hand used to manipulate the environment.
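For a serial chain like the arm described above, the position of the end effector follows from the joint angles by forward kinematics; a minimal planar two-link sketch (link lengths and angles are arbitrary illustration values):

```python
import math

def end_effector(l1: float, l2: float, theta1: float, theta2: float) -> tuple:
    """Planar forward kinematics of a two-link serial chain.

    Each joint adds its rotation to the links that follow it, and the
    end-effector position is the vector sum of the two link vectors.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along x when both joint angles are zero:
print(end_effector(1.0, 1.0, 0.0, 0.0))  # (2.0, 0.0)
```

Each extra joint adds one term of the same form, which is why open serial chains like robot arms are straightforward to model this way.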
Contents

* 1 Etymology
* 2 Components of robots
o 2.1 Actuation
o 2.2 Manipulation
o 2.3 Locomotion
+ 2.3.1 Rolling Robots
+ 2.3.2 Walking Robots
+ 2.3.3 Other methods of locomotion
o 2.4 Human interaction
* 3 Control
* 4 Dynamics and kinematics
* 5 External links
* 6 References

Etymology

The word robotics was first used in print by Isaac Asimov, in his science fiction short story "Runaround", published in March 1942 in Astounding Science Fiction.[3] While it was based on the word "robot", coined by science fiction author Karel Čapek, Asimov was unaware that he was coining a new term: the design of electrical devices is called electronics, so the design of robots is called robotics.[4] Before the coining of the term, however, there was interest in ideas similar to robotics (namely automata and androids) dating as far back as the 8th or 7th century BC. In the Iliad, the god Hephaestus made talking handmaidens out of gold.[5] Archytas of Tarentum is credited with creating a mechanical pigeon in 400 BC.[6] Robots are used in industrial, military, exploration, homemaking, and academic and research applications.[7]

Components of robots

Actuation
A robot leg, powered by Air Muscles.

The actuators are the 'muscles' of a robot; the parts which convert stored energy into movement. By far the most popular actuators are electric motors, but there are many others, some of which are powered by electricity, while others use chemicals, or compressed air.

* Motors: The vast majority of robots use electric motors, of which there are several kinds. DC motors, which are familiar to many people, spin rapidly when an electric current is passed through them, and will spin backwards if the current is made to flow in the other direction.
* Stepper Motors: As the name suggests, stepper motors do not spin freely like DC motors; they rotate in steps of a few degrees at a time, under the command of a controller. This makes them easier to control, as the controller knows exactly how far they have rotated without having to use a sensor, so they are used in many robots and CNC machining centres.
* Piezo Motors: A recent alternative to DC motors are piezo motors, also known as ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic legs, vibrating many thousands of times per second, walk the motor round in a circle or a straight line.[8] The advantages of these motors are incredible nanometre resolution, speed and available force for their size.[9] These motors are already available commercially, and being used on some robots.[10][11]
* Air Muscles: The air muscle is a simple yet powerful device for providing a pulling force. When inflated with compressed air, it contracts by up to 40% of its original length. The key to its behaviour is the braiding visible around the outside, which forces the muscle to be either long and thin, or short and fat. Since it behaves in a very similar way to a biological muscle, it can be used to construct robots with a similar muscle/skeleton system to an animal.[12] For example, the Shadow robot hand uses 40 air muscles to power its 24 joints.
* Electroactive Polymers: These are a class of plastics which change shape in response to electrical stimulation.[13] They can be designed so that they bend, stretch or contract, but so far there are no EAPs suitable for commercial robots, as they tend to have low efficiency or are not robust.[14] Indeed, all of the entrants in a recent competition to build EAP-powered arm-wrestling robots were beaten by a 17-year-old girl.[15] However, they are expected to improve in the future, and may be useful for microrobotic applications.[16]

* Elastic nanotubes are a promising, early-stage experimental technology. The absence of defects in nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm³ for metal nanotubes. A human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans.[17]
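The open-loop position control that makes steppers attractive can be sketched in a few lines. This is a minimal illustration of an assumed 4-phase stepper driven with a full-step sequence; `set_coils` is a hypothetical stand-in for a real GPIO write, and the 1.8° step angle is a common but assumed value.

```python
# Full-step drive sequence for an assumed 4-phase stepper motor.
FULL_STEP_SEQUENCE = [
    (1, 0, 0, 0),
    (0, 1, 0, 0),
    (0, 0, 1, 0),
    (0, 0, 0, 1),
]

def set_coils(phase):
    """Placeholder for the hardware write that energises the coils."""
    pass

def rotate(steps, step_angle_deg=1.8):
    """Step the motor and return the total rotation in degrees.

    Because the controller counts the steps it issues, it knows the
    shaft position without any feedback sensor, as noted above.
    """
    for i in range(abs(steps)):
        index = i % 4 if steps > 0 else 3 - (i % 4)
        set_coils(FULL_STEP_SEQUENCE[index])
    return steps * step_angle_deg

# A common 1.8-degree motor needs 200 steps per full revolution:
# rotate(200) returns 360.0
```

Reversing the coil order (negative `steps`) turns the shaft the other way, mirroring the DC-motor behaviour described above but with a known step count.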

Manipulation

Robots which must work in the real world require some way to manipulate objects: to pick up, modify, destroy or otherwise have an effect. Thus the 'hands' of a robot are often referred to as end effectors,[18] while the arm is referred to as a manipulator.[19] Most robot arms have replaceable effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator which cannot be replaced, while a few have one very general-purpose manipulator, for example a humanoid hand.
A simple gripper

* Grippers: A common effector is the gripper. In its simplest manifestation it consists of just two fingers which can open and close to pick up and let go of a range of small objects. See End effectors [1].
* Vacuum Grippers: Pick-and-place robots for electronic components, and for large objects like car windscreens, often use very simple vacuum grippers. These are simple astrictive devices that can hold very large loads, provided the prehension surface is smooth enough to ensure suction.
* General purpose effectors: Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand (right), or the Schunk hand.[20] These highly dexterous manipulators, with as many as 20 degrees of freedom and hundreds of tactile sensors[21] can be difficult to control. The computer must consider a great deal of information, and decide on the best way to manipulate an object from many possibilities.


For a definitive guide to all forms of robot end effectors, their design and usage, consult the book "Robot Grippers".[22]
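The simplest gripper described above can be sketched as force-limited closing: narrow the fingers until either the object's width or a force threshold is reached, so fragile objects are not crushed. The interface below is hypothetical, not taken from the Shadow, Schunk or any other product's API.

```python
class SimpleGripper:
    """Toy two-finger gripper model (hypothetical interface)."""

    def __init__(self, max_force=5.0):
        self.max_force = max_force   # assumed force limit, newtons
        self.aperture = 1.0          # normalised opening, 1.0 = fully open

    def close_on(self, object_width, read_force):
        """Narrow the aperture until it matches the object's width or the
        measured contact force reaches the limit. `read_force` stands in
        for a force sensor and is called with the current aperture."""
        while self.aperture > object_width:
            if read_force(self.aperture) >= self.max_force:
                break
            self.aperture -= 0.01
        return self.aperture
```

With a compliant object the loop stops on the force limit; with no contact it stops at the object's width. A humanoid hand multiplies this decision across 20 degrees of freedom and hundreds of tactile sensors, which is why control becomes hard.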

Locomotion

Rolling Robots
Segway in the Robot museum in Nagoya.

For simplicity, most mobile robots have four wheels. However, some researchers have tried to create more complex wheeled robots, with only one or two wheels.

* Two-wheeled balancing: While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot. Several real robots do use a similar dynamic balancing algorithm, and NASA's Robonaut has been mounted on a Segway.[23]
* Ballbot: Carnegie Mellon University researchers have developed a new type of mobile robot that balances on a ball instead of legs or wheels. "Ballbot" is a self-contained, battery-operated, omnidirectional robot that balances dynamically on a single urethane-coated metal sphere. It weighs 95 pounds and is the approximate height and width of a person. Because of its long, thin shape and ability to maneuver in tight spaces, it has the potential to function better than current robots can in environments with people.[24]
* Track Robot: Another type of rolling robot is one that has tracks, like NASA's Urban Robot, Urbie. [25]

Walking Robots
iCub robot, designed by the RobotCub Consortium

Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none has yet been made which is as robust as a human. Typically, these robots walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:

* Zero Moment Point (ZMP) Technique: the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial forces (the combination of earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).[26] However, this is not exactly how a human walks, and the difference is quite apparent to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory.[27][28][29] ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
* Hopping: Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot, could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot falls to one side, it would jump slightly in that direction, in order to catch itself.[30] Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults.[31] A quadruped was also demonstrated which could trot, run, pace and bound.[32] For a full list of these robots, see the MIT Leg Lab Robots page.
* Dynamic Balancing: A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion and places the feet in order to maintain stability.[33] This technique was recently demonstrated by Anybots' Dexter robot,[34] which is so stable that it can even jump.[35]
* Passive Dynamics: Perhaps the most promising approach being taken is to use the momentum of swinging limbs for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.[36][37]
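The force-balance idea behind ZMP can be made concrete with the textbook point-mass model. The sketch below computes the x-coordinate of the ZMP for a robot approximated as point masses in the sagittal plane; this is a standard simplification for illustration, not Honda's actual implementation. The robot remains balanced while this point stays inside the support polygon of the feet.

```python
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(masses, xs, zs, xacc, zacc):
    """Zero Moment Point (x-coordinate) for a set of point masses.

    masses     : mass of each body segment (kg)
    xs, zs     : horizontal / vertical positions (m)
    xacc, zacc : horizontal / vertical accelerations (m/s^2)
    """
    num = sum(m * ((za + G) * x - xa * z)
              for m, x, z, xa, za in zip(masses, xs, zs, xacc, zacc))
    den = sum(m * (za + G) for m, za in zip(masses, zacc))
    return num / den

# Sanity check: a stationary single mass has its ZMP directly below it,
# zmp_x([50.0], [0.1], [0.8], [0.0], [0.0]) returns 0.1
```

Accelerating the torso forward shifts the ZMP backward, which is exactly why the controller must modulate accelerations to keep the point under the feet.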


Other methods of locomotion
RQ-4 Global Hawk Unmanned Aerial Vehicle. No pilot means no windows.

* Flying: A modern passenger airliner is essentially a flying robot, with two humans to attend it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight and even landing.[citation needed] Other flying robots are completely automated, and are known as Unmanned Aerial Vehicles (UAVs). They can be smaller and lighter without a human pilot, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter and the Epson micro helicopter robot.

Two robot snakes: the left one has 32 motors, the right one 10.

* Snake: Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings.[38] The Japanese ACM-R5 snake robot [39] can even navigate both on land and in water.[40]
* Skating: A small number of skating robots have been developed, one of which is a multi-mode walking and skating device, Titan VIII. It has four legs, with unpowered wheels, which can either step or roll[41]. Another robot, Plen, can use a miniature skateboard or rollerskates, and skate across a desktop.[42]
* Swimming: It is calculated that some fish can achieve a propulsive efficiency greater than 90%.[43] Furthermore, they can accelerate and manoeuvre far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion.[44] Notable examples are the Essex University Computer Science Robotic Fish[45], and the Robot Tuna built by the Institute of Field Robotics to analyse and mathematically model thunniform motion.[46]


Human interaction
Kismet (robot) can produce a range of facial expressions

If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they can be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, so any interface will need to be extremely intuitive. Science fiction authors typically assume that robots will eventually communicate with humans by talking, gestures and facial expressions, rather than through a command-line interface. Although speech would be the most natural way for the human to communicate, it is quite unnatural for the robot. It will be quite a while before robots interact as naturally as the fictional C-3PO.

* Speech Recognition: Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech. The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent.[47] Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system", which recognized "ten digits spoken by a single user with 100% accuracy", in 1952.[48] Currently, the best systems can recognise continuous, natural speech at up to 160 words per minute, with an accuracy of 95%.[49]
* Gestures: One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. On both of these occasions, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognising gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is quite likely that gestures will make up a part of the interaction between humans and robots.[50] A great many systems have been developed to recognise human hand gestures.[51]
* Facial expression: Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon they may do the same for dialog between humans and robots. A robot should know how to approach a human, judging by their facial expression and body language: whether the person is happy, frightened or agitated affects the type of interaction expected of the robot. Likewise, a robot like Kismet can produce a range of facial expressions, allowing it to have meaningful social exchanges with humans.[52]
* Personality: Many of the robots of science fiction have personality, and that is something which may or may not be desirable in the commercial robots of the future.[53] Nevertheless, researchers are trying to create robots which appear to have a personality[54][55]: i.e. they use sounds, facial expressions and body language to try to convey an internal state, which may be joy, sadness or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.[56]
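Accuracy figures like the 95% quoted for speech recognition are conventionally derived from word error rate (WER): the word-level edit distance between the recognised text and a reference transcript, divided by the reference length. A minimal sketch of that metric:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

# "pick up the red block" vs "pick up red block": one deletion out of
# five reference words, so WER = 0.2 (word accuracy 80%)
```

A system quoted at 95% accuracy is thus one whose WER is about 0.05 on its evaluation transcripts.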

Control

The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases - perception, processing and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). Using strategies from the field of control theory, this information is processed to calculate the appropriate signals to the actuators (motors) which move the mechanical structure. The control of a robot involves path planning, pattern recognition, obstacle avoidance, etc. More complex and adaptable control strategies can be referred to as artificial intelligence.
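The perception-processing-action loop can be illustrated with the most common strategy from control theory, a PID controller. This is a toy sketch: the one-line "plant" standing in for the robot's mechanics is a deliberate simplification, and the gains are arbitrary illustrative values.

```python
class PIDController:
    """Textbook PID controller: maps sensed error to an actuator signal."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement              # perception
        self.integral += error * dt                 # processing ...
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # ... and action: the signal sent to the actuator (e.g. a motor)
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Driving a simulated joint angle toward a 1.0 rad setpoint, with a
# crude plant model in which the angle responds directly to the command:
pid = PIDController(kp=2.0, ki=0.1, kd=0.05)
angle = 0.0
for _ in range(500):
    command = pid.update(setpoint=1.0, measurement=angle, dt=0.01)
    angle += command * 0.01
# angle has now settled close to the setpoint
```

Path planning, pattern recognition and obstacle avoidance all sit above loops like this one, deciding what the setpoints should be.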

Dynamics and kinematics

The study of motion can be divided into kinematics and dynamics. Direct kinematics refers to the calculation of end effector position, orientation, velocity and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance and singularity avoidance. Once all relevant positions, velocities and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end effector acceleration. This information can be used to improve the control algorithms of a robot.
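For a planar two-link arm, both directions reduce to a few lines of trigonometry. A minimal sketch with assumed link lengths (illustrative, not taken from any particular robot): `forward` is direct kinematics, `inverse` is inverse kinematics, and the two-branch solution is exactly the redundancy mentioned above.

```python
import math

L1, L2 = 0.30, 0.25  # assumed link lengths in metres

def forward(theta1, theta2):
    """Direct kinematics: joint angles (rad) -> end effector (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: end effector (x, y) -> one joint solution.

    Returns the 'elbow-down' branch; negating theta2 gives the other
    branch (two ways to reach the same point). math.acos raises
    ValueError when the target point is out of reach.
    """
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2
```

A quick consistency check is the round trip: `forward(*inverse(x, y))` recovers `(x, y)` for any reachable point, which is what path planning relies on.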

In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure and control of robots must be developed and implemented.

External links

* The “official” Hall of Fame for robots: voting is currently underway for a new round of inductees.
* Robot news and Robotics information
* Small robots drive trains – A tutorial discussing the different techniques used to build the chassis and drive trains of relatively small robots
* A review of robotics software platforms Linux Devices.
* UNSW Computational Mechanics and Robotics Group
* Robotics news, theory of robotics
* "How Robots Work"
* JPL's Robotic website

References

1. ^ Definition of robotics - Merriam-Webster Online Dictionary. Retrieved on 2007-08-26.
2. ^ Industry Spotlight: Robotics from Monster Career Advice. Retrieved on 2007-08-26.
3. ^ Isaac Asimov. Isaac Asimov's Robotics FAQ. Retrieved on 2008-02-29.
4. ^ Asimov, Isaac (2003). Gold. Eos.
5. ^ Deborah Levine Gera. Ancient Greek Ideas on Speech, Language, and Civilization. Retrieved on 2007-12-31.
6. ^ BBC NEWS. Retrieved on 2007-08-26.
7. ^ Robotics: About the Exhibition. Retrieved on 2007-08-26.
8. ^ Piezo LEGS® - Technology. Piezomotor. Retrieved on 2007-09-26.
9. ^ Squiggle Motors: Overview. Retrieved on 2007-10-08.
10. ^ Nishibori et al. (2003). "Robot Hand with Fingers Using Vibration-Type Ultrasonic Motors (Driving Characteristics)". Journal of Robotics and Mechatronics. Retrieved on 2007-10-09.
11. ^ Yamano and Maeno (2005). "Five-fingered Robot Hand using Ultrasonic Motors and Elastic Elements" (PDF). Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Retrieved on 2007-10-09.
12. ^ Shadow Robot Company: Air Muscles. Retrieved on 2007-10-15.
13. ^ ElectroActive Polymers - EAPs. Azom.com The A-Z of Materials. Retrieved on 2007-10-15.
14. ^ Yoseph Bar-Cohen (2002). "Electro-active polymers: current capabilities and challenges" (PDF). Proceedings of the SPIE Smart Structures and Materials Symposium. Retrieved on 2007-10-15.
15. ^ Graham-Rowe, Duncan (2002-03-08). "Arm wrestling robots beaten by a teenaged girl". New Scientist. Retrieved on 2007-10-15. 
16. ^ Otake et al. (2001). "Shape Design of Gel Robots made of Electroactive Polymer Gel" (PDF). Retrieved on 2007-10-16.
17. ^ John D. Madden, 2007, Mobile Robots: Motor Challenges and Materials Solutions, Science 16 November 2007: Vol. 318. no. 5853, pp. 1094 - 1097, DOI: 10.1126/science.1146351
18. ^ What is a robotic end-effector?. ATI Industrial Automation (2007). Retrieved on 2007-10-16.
19. ^ Crane, Carl D.; Joseph Duffy (1998-03). Kinematic Analysis of Robot Manipulators. Cambridge University Press. ISBN 0521570638. Retrieved on 2007-10-16.
20. ^ Allcock, Andrew (2006-09). Anthropomorphic hand is almost human. Machinery. Retrieved on 2007-10-17.
21. ^ Shadow Dextrous Hand technical spec
22. ^ G.J. Monkman, S. Hesse, R. Steinmann & H. Schunk – Robot Grippers - Wiley, Berlin 2007
23. ^ ROBONAUT Activity Report. NASA (2004-02). Retrieved on 2007-10-20.
24. ^ Carnegie Mellon (2006-08-09). "Carnegie Mellon Researchers Develop New Type of Mobile Robot That Balances and Moves on a Ball Instead of Legs or Wheels". Press release.
25. ^ http://www-robotics.jpl.nasa.gov/systems/system.cfm?System=4#urbie
26. ^ Achieving Stable Walking. Honda Worldwide. Retrieved on 2007-10-22.
27. ^ Funny Walk. Pooter Geek (2004-12-28). Retrieved on 2007-10-22.
28. ^ ASIMO's Pimp Shuffle. Popular Science (2007-01-09). Retrieved on 2007-10-22.
29. ^ Vtec Forum: A drunk robot? thread
30. ^ 3D One-Leg Hopper (1983-1984). MIT Leg Laboratory. Retrieved on 2007-10-22.
31. ^ 3D Biped (1989-1995). MIT Leg Laboratory.
32. ^ Quadruped (1984-1987). MIT Leg Laboratory.
33. ^ About the robots. Anybots. Retrieved on 2007-10-23.
34. ^ Homepage. Anybots. Retrieved on 2007-10-23.
35. ^ Dexter Jumps video. YouTube (2007-03). Retrieved on 2007-10-23.
36. ^ Collins, Steve; Wisse, Martijn; Ruina, Andy; Tedrake, Russ (2005-02-11). "Efficient bipedal robots based on passive-dynamic Walkers" (PDF). Science (307): 1082-1085. Retrieved on 2007-09-11. 
37. ^ Collins, Steve; Ruina, Andy. "A bipedal walking robot with efficient and human-like gait". Proc. IEEE International Conference on Robotics and Automation..
38. ^ Miller, Gavin. Introduction. snakerobots.com. Retrieved on 2007-10-22.
39. ^ ACM-R5
40. ^ Swimming snake robot (commentary in Japanese)
41. ^ Commercialized Quadruped Walking Vehicle "TITAN VII". Hirose Fukushima Robotics Lab. Retrieved on 2007-10-23.
42. ^ Plen, the robot that skates across your desk. SCI FI Tech (2007-01-23). Retrieved on 2007-10-23.
43. ^ Sfakiotakis, et al. (1999-04). "Review of Fish Swimming Modes for Aquatic Locomotion" (PDF). IEEE Journal of Oceanic Engineering. Retrieved on 2007-10-24.
44. ^ Richard Mason. What is the market for robot fish?.
45. ^ Robotic fish powered by Gumstix PC and PIC. Human Centred Robotics Group at Essex University. Retrieved on 2007-10-25.
46. ^ Witoon Juwarahawong. Fish Robot. Institute of Field Robotics. Retrieved on 2007-10-25.
47. ^ Survey of the State of the Art in Human Language Technology: 1.2: Speech Recognition
48. ^ Fournier, Randolph Scott, and B. June Schmidt. "Voice Input Technology: Learning Style and Attitude Toward Its Use." Delta Pi Epsilon Journal 37 (1995): 1-12.
49. ^ History of Speech & Voice Recognition and Transcription Software. Dragon Naturally Speaking. Retrieved on 2007-10-27.
50. ^ Waldherr, Romero & Thrun (2000). "A Gesture Based Interface for Human-Robot Interaction" (PDF). Kluwer Academic Publishers. Retrieved on 2007-10-28.
51. ^ Markus Kohler. Vision Based Hand Gesture Recognition Systems. University of Dortmund. Retrieved on 2007-10-28.
52. ^ Kismet: Robot at MIT's AI Lab Interacts With Humans. Sam Ogden. Retrieved on 2007-10-28.
53. ^ (Park et al. 2005) Synthetic Personality in Robots and its Effect on Human-Robot Relationship
54. ^ National Public Radio: Robot Receptionist Dishes Directions and Attitude
55. ^ New Scientist: A good robot has personality but not looks
56. ^ Ugobe: Introducing Pleo

Retrieved from "http://en.wikipedia.org/wiki/Robotics"
Take a Pick at (Crucial) The Singularity....

* http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
* http://mindstalk.net/vinge/vinge-sing.html

Andy's More Resources...

* http://agostinitransforms.blogspot.com/

Business/Management Blogging at Tom Peters'

* http://www.geocities.com/omnigerencia/d.html

Andy's BIO

* http://agostinicv.blogspot.com/

Stephen Lockwood on Andy...

* http://testiomonialtwo.blogspot.com/

GMAC on Andy

* http://testimonialone.blogspot.com/

Andy's Mentor for Over 15 Years...

* http://www.geocities.com/intoappliedscience/1.html

Andy & The Photo...

* http://www.geocities.com/dedicatoryphoto/1.html

Andy's Mentor for Over 17 Years

* http://agostinimentor.blogspot.com/

ACC GROUP WORLDWIDE on Andy

* http://www.geocities.com/accbio/1.html

A Photo for Mind Expansion...
Andres Agostini (Ich Bin Singularitarian!)
The Andres Agostini Globe !!!

* http://agostiniglobe.blogspot.com/

The Andres Agostini Herald !!!

* http://agostiniherald.blogspot.com/

The Andres Agostini Multiverse !!!

* http://agostinimultiverse.blogspot.com/

The Andres Agostini Times ....

* http://theandresagostinitimes.blogspot.com/

Leadership (briefly) ....
Beyond Serendipity (Andres Agostini) Ich bin Singularitarian

Authentic leadership is a function of those purposes, those commitments of service and the discipline anchoring them, not a function of self-expression.
Posted by Beyond Leadership (Andres Agostini) at 18:18 0 comments
Labels: Beyond Serendipity (Andres Agostini) Ich bin Singularitarian
Beyond Serendipity (Andres Agostini) Ich bin Singularitarian

Leadership is a function of a leader, follower, and situation that are appropriate for one another.
Posted by Beyond Leadership (Andres Agostini) at 18:17 0 comments
Labels: Beyond Serendipity (Andres Agostini) Ich bin Singularitarian
Beyond Serendipity (Andres Agostini) Ich bin Singularitarian

Leadership is a function of knowing yourself, having a vision that is well communicated, building trust among colleagues, and taking effective action to ...
Posted by Beyond Leadership (Andres Agostini) at 18:16 0 comments
Labels: Beyond Serendipity (Andres Agostini) Ich bin Singularitarian
Beyond Serendipity (Andres Agostini) Ich bin Singularitarian

Leadership is a function of team unity.
Reflections on Womb-To-Tomb Management Practices:

Transformative Risk Management by Andres Agostini

HAZARD is a function of a solvent's toxicity and the amount which volatilizes and ...

Posted by Transformative Risk Management (Andres Agostini) at 13:54 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

HAZARD is a function of the way a chemical is produced, used or discarded.

Posted by Transformative Risk Management (Andres Agostini) at 13:53 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

HAZARD is a function of the probability.

Posted by Transformative Risk Management (Andres Agostini) at 13:51 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

Risk is a function of hazard, exposure and dose. Even a hazardous material doesn't pose risk if there is no exposure.
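Taken literally, this statement suggests a multiplicative toy model; the sketch below is my own illustration of that point, not a standard risk-assessment formula.

```python
def risk_score(hazard, exposure, dose=1.0):
    """Toy multiplicative risk model (illustrative only): hazard,
    exposure and dose are each normalised to [0, 1]. Zero exposure
    yields zero risk, which is the point made above."""
    return hazard * exposure * dose

# A highly hazardous material with no exposure poses no risk:
# risk_score(hazard=0.9, exposure=0.0) returns 0.0
```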

Posted by Transformative Risk Management (Andres Agostini) at 13:50 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

RISK is a function of the existence of the human being, who lives in a social environment under permanent variation.

Posted by Transformative Risk Management (Andres Agostini) at 13:48 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

RISK is a function of the chemical’s toxicity and exposure to it.

Posted by Transformative Risk Management (Andres Agostini) at 13:47 0 comments

Labels: Transformative Risk Management by Andres Agostini

Transformative Risk Management by Andres Agostini

Risk is a function of price, but risk can certainly be a matter of perception. Real risk, perceived risk and relative risk have flowed like a river through much of human history, slightly chaotic in nature and a little bit dangerous, no?

Enterprise Hazard Termination (Andres Agostini) Ich bin Singularitarian

REFLECTION ON "ENTERPRISE HAZARDS"

The 'risk' posed by the 'hazard' is a function of the probability > (lightening strike . Conductor failure) and the consequence (death, > financial loss etc ...... The 'risk' posed by the 'hazard' is a function of the probability (lightening strike . ..... The potential hazard is a function of the following:. The exposure time (chronic or acute); The irradiance value (a function of both the image size and the ...... Seismic hazard is a function of the acceleration coefficient…..Hazard is a function of the way a chemical is. produced, used or discarded…..Relative Inhalation Hazard at Room Temperature: The relative inhalation hazard is a function of a solvent's toxicity and the amount which volatilizes and ...... The relative fire hazard is a function of. at what temperature the material will give off flammable vapors which when come in contact with a ...... Hazard is a function of the toxicity of a pesticide and the potential for exposure to it. We do not have control of the toxicity of a pesticide since ...... In the products liability context, the obviousness of a hazard is a function of “the typical. user’s perception and knowledge and whether the relevant ...... It fails to take into account the fact that the assessment of the hazard is a function of the inspection carried out by the environmental health officer ....... Toxicity: the inherent capacity of a substance to produce an injury or death; Hazard: hazard is a function of toxicity and exposure; the potential threat ...... 
A hazard is a function of both the magnitude of a physical event such as an earthquake and the state of preparedness of the society that is affected by it……Risk, for any specific hazard, is a function of the severity of possible harm and the probability of the occurrence of that harm…..work, where the crime hazard is a function of, some baseline hazard common to all individuals, and explanatory variables……Erosion hazard is a function of soil texture, crop residue and slope……The degree of fire hazard is a function of a number of factors, such as fuel load, building structure, ignition, and propagation of flames, ....... While it is clear that the degree of hazard is a function of both velocity (v). and depth (d) (e.g., Abt et al., 1989), and that a flood with depth but no ...... The level of risk posed by a hazard is a function of the probability of exposure to that hazard and the extent of the harm that would be ....... seismic hazard is a function of failure probability vs. PFA (peak floor. acceleration). This function varies….. The degree of hazard is a function of the frequency of the presence of the ignitable gas or vapor. That is, the more often the ignitable gas or vapor ...... The degree of hazard is a function of the differing toxicity of the various forms of beryllium and of the type and magnitude of beryllium exposure…..Thus, hazard is a function of survival time. The cumulative hazard at a given time is the hazard integrated over the whole time interval until the given ...... and exposure is a function of the nature of emission sources, paths and receivers, and hazard is a function of chemical attributes and their myriad health ....... Often "hazard" is a function of the very properties which can be harnessed to create value for society (e.g. chemical reactivity)….. Hazard is a function of toxicity and exposure. 
* If the toxicity is low and the exposure is low, then the hazard will be low.
* Hazard is a function of a set of independent variables.
* The relative inhalation hazard is a function of a solvent's toxicity and the amount that volatilizes and thus is available for inhalation at room ...
* Electrical shock hazard is a function of the current through the human body. Current can be directly limited by design, by additional current limiting ...
* Seismic hazard is a function of the size, or magnitude, of an earthquake, distance from the earthquake, local soils, and other factors, and is independent of ...
* Our response to a real or imagined hazard is a function of our perception of that hazard. In many situations, hazards are ignored or disregarded.
* The same thing; related, in that vulnerability is a function of hazard; related, in that hazard is a function of vulnerability; not related.
* Hazard is a function of the intrinsic properties of the chemical that relate to persistence, bioaccumulation potential, and toxicity.
* Expected damage or loss from a given hazard is a function of hazard characteristics (probability, intensity, extent) and vulnerability ...
* The estimation of risk for a given hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its ...
* Hazard is a function of growth pressures and the interest rate, as well as other variables (e.g., development fees) that vary over time but not over parcels ...
* Hazard is a function of two primary variables, toxicity and exposure, and is the probability that injury will result ...
* In this system, hazard is a function of the frequency of weather conditions favorable to WPBR infection. Hazard is defined as potential stand damage.
* Prevention of destructibility of a hazard is a function of effective preparedness and mitigation measures following an objective analysis of the ...
* Hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its consequences.
* Hazard is a function of the organism and is related to its ability to cause negative effects on humans, animals, or the ecosystem.
* All the airplanes are creating vortices, but the real issue is that the hazard is a function of a lot of characteristics, but mostly the ...
* Hazard is a function of the elapsed time since the last seismic event and the physical dimensions of the related active fault segment.
* The degree of hazard is a function of the chemical/physical properties of the substance(s) and the quantities involved.
* Thus, the hazard is a function of event, use, and actions taken to reduce losses.
* The degree of hazard is a function of both the probability that backflow may occur and the toxicity or pathogenicity of the contaminant involved.
* The estimated hazard is a function of unemployment duration, but in the model it is a function of human capital. In order to map duration into human capital ...
* The risk of each hazard is a function of the contaminant source, containment, transport pathway, and the receptor.
* The hazard is a function of current magnitude and time, or the integral of current.
* The level of risk posed by a hazard is a function of the probability of exposure to that hazard and the extent of the harm that would be caused by that ...
* Hazard is a function of exposure and effect. Hazard assessment can be used to either refute or quantify potentially harmful effects.
* The hazard is a function of rainfall erosivity, slope (gradient and length), soil erodibility, and the amount of vegetative protection on the surface.
* The estimation of risk for any given hazard is a function of the relative likelihood of its occurrence and the severity of harm resulting from its ...
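A recurring theme in the snippets above is that risk is a function of likelihood and severity. That idea can be sketched as a minimal qualitative risk matrix; the 1-to-5 rankings and the low/medium/high thresholds below are illustrative assumptions, not anything prescribed by the sources quoted:

```python
def risk_score(likelihood: int, severity: int) -> str:
    """Classify risk on a 5x5 matrix: score = likelihood * severity.

    Both inputs are ranked from 1 (lowest) to 5 (highest).
    """
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("rankings must be between 1 and 5")
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_score(1, 2))  # low
print(risk_score(4, 5))  # high
```

The multiplicative form mirrors the "likelihood of occurrence times severity of harm" formulation that recurs throughout the excerpts; real risk registers often use lookup tables instead of a plain product.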

By Andres Agostini

Ich Bin Singularitarian!

www.geocities.com/agosbio/a.html

Management's Best Practices (Andres Agostini) Ich bin Singularitarian

MANAGEMENT REFLECTIONS (BUSINESS-plus)

* Moral responsibility in corporate medical management is a function of the exercise of authority over different aspects of the medical decision making ...
* Management is a function of hazards mitigation and vulnerability reduction. This is a very simple understanding for people in disaster studies.
* Effective management is a function of developing proper individual or team performance measures and then monitoring those ...
* Natural resource management is a function of managing County parks, reserves, and recreation areas.
* Emergency management is a function of the department as well. This is a co-managed function of both the City of Kearney and Buffalo County.
* Management is a function of position, while leadership is a function of skill. Some of the most effective leaders I meet and observe in my work have no ...
* Good strategic learning management is a function of how well prepared the company's psyche is in pro-actively blending the massive changes taking place in ...
* Risk management is a function of impact management.
* Management is a function of planning, organizing, controlling, leading, and staffing.
* Management is a function of every stakeholder in an organisation. If all this "management" is working towards well-defined and appropriate objectives, then it ...
* Money management is a function of determining how much of your account to risk on any given trade or, for that matter, any given strategy.
* Crisis management is a function of all public, private, and non-profit organizations, supporting their fundamental strategic objective of ensuring ...
* Public Sector Financial Management is a function of the Department of Treasury and Finance, and Budget Management is one of its activities.
* Ecological pest management is a function of "many little hammers, but no silver bullet."
* Management is a function of socio-economic factors.
* Management is a function of the quality and consistency of routine operations.
* It is recognised that people management is a function of partners and managers, no matter how senior they might be. Psychological profiling can help firms ...
* Social construction, and, therefore, knowledge management is a function of social structure; the tool is called social network analysis.
* Management is a function of the management's forecast and actual aggregate demand of the next period.
* Collateral management is a function of ever-growing importance to the futures industry, with operating margins coming under increasing scrutiny ...
* Introduction to crisis management as it is applied in public, private, and non-profit organizations; crisis management is a function of all organizations ...
* Advancement and participation in project management is a function of the type of organizational culture which has traditionally ...
* Strategic management is a function of the cognitive, experiential, and informational skills of the manager.
* Request For Proposal management is a function of creating a detailed and concise document.
* Nursing management is a function of the personnel department of a business. It deals with a system in high tension, with a network of interrelated ...
* The proportionality of the two chosen by management is a function of values, acumen, environment, and situation.
* Budget management is a function of the chair that requires teamwork with individuals both inside and outside the department.
* The cost of memory management is a function of the allocation cost of memory associated with an instance of a type, and the cost of managing that memory over ...
* Management is a function of position and authority; leadership is not dependent on either, but is a function of personality.
* Therefore, disaster risk management is a function of hazards mitigation and vulnerability reduction.
* Today, in team-based, knowledge-centric enterprises, management is a function of consensus building, motivating employees, and convincing others.
* Crisis management is a function of anticipation and planning before the crisis occurs.
* Advertising management is a function of marketing, starting from market research, continuing through advertising, and leading to actual sales or achievement of ...
* Inventory management is a function of central importance in manufacturing control. It is an evolving discipline which encompasses the principles ...
* Knowledge management is a function of the generation and dissemination of information, developing a shared understanding of the information ...
* Today, crisis management is a function of information management. Respond! aims at improving emergency management communication and enlarging the knowledge ...
* Time management is a function of how we manage the passing of time; we have little control over the situation.
* The effectiveness of the application to stormwater management is a function of the hydraulic design of the bioretention system.
* Management is a function of the difference between the benefit received from management and that which can be acquired from alternative outcome ...
* Contract management is a function of both project management and financial management.
* Territory management, like time management, is a function of many attitudes, habits, values, skills, and beliefs.
* Vegetation treatment and management is a function of site development scale; development scale will determine the intensity of treatment activities, etc.
* The ability to self-manage is a function of individual differences and is, therefore, dependent upon many variables, including specific biological ...
* How you manage is a function of your personality. Two extremes: too much management; too little management.

By Andres Agostini

Ich Bin Singularitarian!

www.geocities.com/agosbio/a.html
Leonardo by Andy...

* http://www.geocities.com/davincianleo/1.html

Andy’s Watch/Clock Indicates This Time:

* http://www.time.gov/timezone.cgi?Eastern/d/-5/java

Andy as per Alexa! ....

* http://www.alexa.com/search?q=%22andres+agostini%22&page=3&count=10

Key Links ....

* http://www.google.com/search?hl=en&q=%22andres+agostini%22&btnG=Google+Search
* http://search.yahoo.com/search?p=%22andres+agostini%22&fr=yfp-t-501&toggle=1&cop=mss&ei=UTF-8
* http://search.msn.com/results.aspx?q=%22andres+agostini%22&FORM=MSNH
* http://en.wikipedia.org/
* http://www.google.com
* http://www.youtube.com/i
* http://www.facebook.com
* http://www.amazon.com
* http://www.amazon.co.uk/
* http://www.barnesandnoble.com
* http://www.cnn.com
* http://www.foxnews.com
* http://www.ebay.com
* http://www.wikinomics.org
* http://www.yahoo.com
* http://www.msn.com
* http://www.microsoft.com
* http://www.live.com
* http://www.alexa.com/

Andy Webcasted....

* http://agostiniwebcasted.blogspot.com/

WikiBloggist ! ! ! - Andres Agostini's Thorough Blog - Arlington, Virginia, USA

* http://wikibloggist.blogspot.com/

Andres Agostini BIO! (Arlington, Virginia, USA)

* http://agoscv.blogspot.com/

COMPLETE SCIENCE ...

* http://completescience.blogspot.com/

Dispatches ...Andy

* http://search.msn.com/results.aspx?q=%22andres+agostini%22+%22Dispatches+from+the+New+World+of+Work%22&go=Search&form=QBAA

Comments + Blogging:

On the Future of Quality !!!

"Excellence is important, yet it means something slightly different to everyone. Do we need a metric for excellence? Why do I believe the qualitative side of it is more important than its numericalization? By the way, the rising tsunamis of vanguard sciences and their corresponding applied technologies keep upping the technical parlance.

These times, as Peter Schwartz would firmly recommend, require "paying" the highest premium for leading knowledge.

"Chindia" (China and India) will not wait for the West. People like Ballmer (Microsoft) and Ray Kurzweil insist that managing current levels of complexity appropriately and in a timely manner might earn one a nice business success.

Yes, simple is beautiful, yet simplicity becomes horrendous when this COSMOS is overwhelmed with paradoxes, contradictions, and predicaments. And you must act to capture success and, above all, to make it sustainable.

Quality is crucial. Benchmarks are important but refer to something else, though similar. Quality standards, in my view, would require a discipline to be named "Systems Quality Assurance." No one wishes for defects or waste.

But wearing my strategy and risk-management hat and vest, the best practices of quality alone will not suffice in many settings. One has got to add (a) Systems Security, (b) Systems Safety, (c) Systems Reliability, (d) Systems Strategic Planning/Management, and a long "so forth."

When this age of changed CHANGE is more complex than ever, and getting increasingly more so, merely being truly excellent requires, without fail, many more approaches and much more stamina."

Posted by Andres Agostini at February 22, 2008 9:18 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 6:25 PM 0 comments

Labels: www.AndresAgostini.blogspot.com, www.andybelieves.blogspot.com


Comments: Hard Work Matters

"Clearly, hard work is extremely important. This work philosophy is gravely under-practiced in the field. Practicing, practicing, and practicing is immeasurably relevant.

Experience accumulated throughout the years is also crucial, particularly when one is always seeking mind-expansion activities.

With it comes practical knowledge. When consulting and training, yes, you are offering ideas that present clients with choices and options leading to solutions.

Communicating with the client is extremely difficult. Nowadays, some technical solutions that the consultant or advisor must implement have a depth that will shock the client unless the targeted audience is carefully and prudently prepared and oriented.

Getting to know the company culture is another sine qua non. The personal cosmology of each executive or staff member involved on behalf of the client is even more important. Likewise, the professional-services expert must do the same with the CEO and the Chairman.

In fact, a serious consultant must keep in his notes an unofficial psychological profile of the client's representatives. One has to communicate unambiguously, but it sometimes helps to adapt one's lexicon to that of the designated client.

From the very first interview, paying strong attention and listening closely to the customer, the advisor must offer choices while always being EDUCATIONAL, INFORMATIVE, and, to some degree, FORMATIVE/INDUCTIVE. That is the challenge.

These times are not those of the past. Even when the third party possesses the knowledge, skills, know-how, and technology, he or she must now work much harder to make sure the customer's mind and heart lock in with his or her own.

Before starting the CONSULTING EFFORT, I personally like to have a couple of informal meetings just to listen, and then listen some more.

Then I forewarn them that I will be asking a great number of questions. Afterwards, I take extensive notes and start crafting a strategy to build rapport with the customer.

Drawing on all the information given informally in advance by the client, I make an oral presentation to confirm that I have understood the problem. I also take this opportunity to capture further information and to relax everyone, while trying to win them over legitimately and transparently.

Then, if I see, for instance, that they cannot name or express their problem lucidly and accurately, I ask questions. I also offer real-life examples of similar problems encountered with other clients.

This opportunity is absolutely vital for gauging the customer's level of competency and knowledge, or lack thereof, about the issue. Having covered all of that, I start speaking informally about options, to get the customer involved in picking out the CHOICE (the solution) and to watch for the client's initial reactions.

In my case, and many times, I must not only transfer the approaches, skills, and technologies, but also institute and sustain them to the 150% satisfaction of my clients.

Those of us involved with Systems Risk Management(*) ("Transformative Risk Management") and Corporate Strategy are obliged to scan for problems, defects, process waste, failures, etc. WITH FORESIGHT.

Once that is done, and still "on guard," I can highlight the opportunity (upside risk) to the client.

Notwithstanding, once you already know your threats, vulnerabilities, hazards, and risks (and have a master risk plan, equally contemplated in your business plan), YOU MUST BE CREATIVE SO THAT "HARD WORK" MAKES A UNIQUE DIFFERENCE IN YOUR INDUSTRY.

While practicing, run a zillion low-cost experiments. Conduct a universe of trials and errors. Commit to serendipity and/or pseudo-serendipity. In the meantime, as former UK Prime Minister Tony Blair says: "EDUCATION, EDUCATION, EDUCATION."

(*) This does not refer at all to insurance, co-insurance, or reinsurance. It is rather the multidimensional, cross-functional management of business processes to keep them compliant with goals and objectives."

Posted by Andres Agostini at February 23, 2008 4:56 PM

Posted by Andres Agostini on This I Believe! (AATIB) at 1:58 PM 0 comments

Labels: www.AndyBelieves.blogspot.com/

Future Shape of Quality

“I like the dreams of the future better than the history of the past.” (Jefferson). In a world, once called the “society of knowledge,” that is getting more and more sophisticated (in society, economics, geopolitics, technology, the environment, and so forth) at over-exponential rates, Ray Kurzweil, in “The Singularity Is Near,” asserts that, mathematically speaking, both the base and the exponent of the power are jumping almost chaotically, as if this forthcoming “Cambrian explosion,” bathed in state-of-the-art applied science, will change everything.

Friedrich Wilhelm Nietzsche, the German philosopher, reminds one, “It is our future that lays down the law of our work.” Churchill, meanwhile, tells us that “the empires of the future are the empires of the [prepared] mind.”

Last night I was reading the book “Wikinomics.” Its authors say that applied science will evolve more in the next 50 years than it did in the past 400. To me, and because of my other research, they are quite conservative. Vernor Vinge, the professor of mathematics, reminds us of the “Singularity,” primarily technological and secondarily social (with humans that are BIO and non-BIO, and derivatives of the two, i.e., in vivo + in silico + in quantum + in non spiritus). Prof. Vinge first presented the idea at a NASA-sponsored symposium; if one would like to check it out, Google it.

Clearly, progress in Quality Assurance has been made by Deming, Juran, Six Sigma, Kaizen (Toyota), and others. I would pay strong attention to their respective prescriptions with an OPEN MIND. Why? Because systems are extremely dynamic these days, starting with the Universe (or “Multiverse”) itself. As I operate with risks and strategies, beyond the views of (a) the strategic planner and (b) the practitioner of management best practices à la non-ad-hoc “project management,” I have to take advantage of many other methodologies.

The compilation of approaches is fun, though it must be extremely cohesive, congruent, and efficacious.

And if the economy grows more complex, I will grab many more methodologies. I have one of my own that I call “Transformative Risk Management,” largely based on the breakthroughs of the military-industrial(-technological) complex, chiefly by the people of nascent NASA (Mercury, Saturn, Apollo) under Dr. Wernher von Braun, then chief engineer. Fortunately, my mentor, a doctor of science who was von Braun’s risk manager for thirteen years, is now my supervisor.

The military-industrial(-technological) complex faced a great many challenges back in the 1950s, and as a result many breakthroughs were brought about. Today, not everyone seems to know and/or institute these findings; some, such as ExxonMobil, do. The book “Powerful Times” attributes to the U.S. defense budget nearly 50% of total worldwide defense spending. What do they do with this kind of money? To a great extent, they pour it into R&D labs of prime quality. Afterwards, they share “initiatives” with R&D labs at universities, global corporations, and “wiki” communities. Imagine.

In addition, the grandfather of in-depth risk analyses goes under many names besides Hazard Mode and Effect Analysis (HMEA). It has also been called Reliability Analysis for Preliminary Design (RAPD); Failure Mode and Effect Analysis (FMEA); Failure Mode, Effect, and Criticality Analysis (FMECA); and Fault Hazard Analysis (FHA). All of these, just to give an example, have to be included in your methodical toolkit alongside Deming’s, Juran’s, Six Sigma’s, and Kaizen’s.
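FMEA and FMECA analyses typically prioritize failure modes by a Risk Priority Number, RPN = severity × occurrence × detection, with each factor ranked from 1 to 10. A minimal sketch of that prioritization; the failure modes and rankings below are made-up illustrations, not data from any real analysis:

```python
# FMEA prioritization: Risk Priority Number = severity * occurrence * detection.
# Each factor is ranked 1 (best) to 10 (worst); the highest RPN is addressed first.
failure_modes = [
    {"mode": "seal leak",     "severity": 8, "occurrence": 3, "detection": 4},
    {"mode": "sensor drift",  "severity": 5, "occurrence": 6, "detection": 7},
    {"mode": "weld fracture", "severity": 9, "occurrence": 2, "detection": 3},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Sort so that the highest-RPN modes get mitigation effort first.
ranked = sorted(failure_modes, key=lambda f: f["rpn"], reverse=True)
for fm in ranked:
    print(f"{fm['mode']}: RPN = {fm['rpn']}")
```

Note how a severe but well-detected failure (weld fracture, RPN 54) can rank below a moderate but frequent, hard-to-detect one (sensor drift, RPN 210), which is precisely why FMEA multiplies the three factors rather than looking at severity alone.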

These fellows manage with what they call “the omniscience perspective,” that is, the totality of knowledge. Believe me, they do mean it.

Yes, work hard, but know what you are doing, always think about the unthinkable, be a foresighter, and assimilate documented “lessons learned” from previous flaws. In the meantime, Sir Francis Bacon wrote, “He that will not apply new remedies must expect new evils; for time is the greatest innovator.”

(*) A "killer" to the "common sense" activist; a blessing to the rampantly unconventional-wisdom practitioner.

For the “crying” one, everything has changed: (i) CHANGE itself, (ii) time, (iii) politics/geopolitics, (iv) applied science and technology, (v) the economy, (vi) the environment (in its amplest meaning), (vii) the Zeitgeist (spirit of the times), (viii) the Weltanschauung (conception of the world), (ix) the prolific interaction between Zeitgeist and Weltanschauung, etc. So there is no need to worry, since from NOW on, and every day forever (kind of...), there will be a different world, clearly so if one looks into the sub-atomic granularity of (zillions of) details. Unless you are a historian, there is no need to speak of PAST, PRESENT, and FUTURE; just talk about the endlessly perennial progression. Let us learn a difficult lesson easily, NOW.

“Study the science of art. Study the art of science. Picture mentally... Draw experientially. Succeed through endless experimentation... It is recommendable to recall that common sense is much more than an immense society of hard-earned practical ideas, of multitudes of life-learned rules and tendencies, balances and checks. Common sense is not just one (1) thing, nor is it, in any way, simple.” (Andres Agostini)

Dwight D. Eisenhower, speaking of leadership, said: “The supreme quality for leadership is unquestionably integrity. Without it, no real success is possible, no matter whether it is on a section gang, a football field, in an army, or in an office.”

“…to a level of process excellence that will produce (as per GE’s product standards) fewer than four defects per million operations…” — Jack Welch (1998).
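Welch's figure is the classic Six Sigma target of 3.4 defects per million opportunities (DPMO). The arithmetic can be sketched as follows; the sample counts are illustrative, not GE's actual figures:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities: defects / total opportunities * 1e6."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# E.g., 17 defects found across 5,000 units, each with 10 defect opportunities:
print(dpmo(17, 5_000, 10))  # 340.0 DPMO; the Six Sigma goal is 3.4
```

A process at 340 DPMO is therefore still two orders of magnitude away from the "fewer than four defects per million operations" standard Welch describes.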

In addition to WORKING HARD, and to treating your hard work as a beloved HOBBY and never as a burden, one may wish to institute the following as well:

1.- Servitize.

2.- Productize.

3.- Webify.

4.- Outsource (strategically “cross” sourcing).

5.- Relate your core business to “molutech” (molecular technology).

Pursue these primary goals (in case a reader is interested):

A.- To build trust.

B.- To empower employees.

C.- To eliminate unnecessary work.

D.- To create a new paradigm for your business enterprise, a [beyond] “boundaryless” organization.

E.- Surf dogmas; evade sectarian doctrines.

Posted by Andres Agostini at February 27, 2008 7:54 PM
Comments: Snide Advertising

Advertising and campaigning must rest on a strong strategic alliance with the client. The objective is to COMMUNICATE the firm’s products, services, values, and ethos in a transparent and accountable way, with zero tolerance for distortion in the messages disseminated.


Ad agencies cannot make up for the shortcomings of the business enterprise, shortcomings that are the consequence of a sub-optimally managed core business. Get the business to its optimum first. Then communicate it clearly, remaining sensitive to the community at large.


A funny piece is one thing; making fun of others is another (and a terrible one). Being creative in the message is highly desirable. If the incumbent corporation has unique attributes and does great business, just say so comprehensibly, without manipulating or over-promising.


Some day soon, the subject of VALUES is going to be more than indispensable to keeping global society alive. The rampant violation of these values should, without fail, be a life-and-death matter of study for ad agencies.


Global climate change, a flu pandemic (to be), geology (earthquakes, volcanoes, tsunamis), large meteorites, and nuclear wars are all among the existential risks. To make matters worse, value violations by ad agencies, the mass media, and the rest of the economy would easily qualify as an existential risk as well.

Humankind requires transparency and accountability as soon as possible.

Posted by Andres Agostini at February 27, 2008 8:34 PM

Comments: Future Shape of Quality


Thank you all for your great contributions and insightfulness. Take, for example, a Quality Assurance Program to be instituted in a company these days, in 2008. One will have to go through tremendous amounts of reading, writing, drawing, spreadsheeting, etc. Since the global village is now the Society of Knowledge, to abate exponential complexity you must not only embrace it fully; you must also be thorough at all times to meet the challenge. One must likewise pay the price of an advanced global economy that is in increasingly perpetual innovation. Da Vinci, in one list of the 10 greatest minds, was #1; Einstein was #10. Subsequently, it is highly recommendable, if one might wish, to pay attention to “Everything should be made as simple [from the scientific stance] as possible, but not simpler.” Mr. Peters, on the other hand, has always stressed the significance of continuously disseminating new ideas, and he is really making an unprecedented effort in that direction. Another premium to pay: being extremely “thorough” (Trump).


Posted by Andres Agostini at February 28, 2008 3:11 PM

Comments: Cool Friend: C. Michael Hiam

We need, globally, to get into the “strongest” peaceful mind-set as soon as possible, and not reach peace by waging wars. Sometimes experts and statesmen may require “surgical interventions,” especially under the monitoring of the U.N. Diplomacy is called to be reinvented and taken to the highest possible state of refinement: more and more diplomacy, and more and more refinement. Then a universal, aggressively enhanced diplomacy can be instituted.

Posted by Andres Agostini at February 29, 2008 4:02 PM

Comments: Success Tips at ChangeThis

I appreciate the current contributions. I would like to think that the nearly impossible is within your reach (while you are emphatically self-driven for accomplishment), given determined aggressiveness towards the ends (objectives, goals) to be met. Churchill offers a great many examples of how an extraordinary leader works.

Many lessons are to be drawn from him, without a doubt. Churchill reminds us, as many others do, that (scientific) knowledge is power. Napoleon, incidentally, said that a high-school (lycée) graduate must study science and English (the lingua franca).

So, the “soft knowledge” (values) plus the “hard knowledge” (science, technology) must converge in the leader (the true statesman). Being up to date on values, science, and technology in the 21st century, en route to being 99% success-compliant, also requires an open, extremely self-critical mind that is well prepared (Pasteur).

Posted by Andres Agostini at February 29, 2008 4:19 PM

Comments: Wiki Contributions

My experience tells me that every client must be worked into becoming your true ally. When you are selling high-tech or novel technologies, products, or services, one must do a lot of talking to guide the customer towards a menu of probable solutions. The more the complications, the more the nice talk in unambiguous language.

If that phase succeeds, it is necessary to make oral and documented presentations to the targeted client. Giving him, while at it, a number of unimpeachable real-life examples (industry by industry) will lead the customer to envision you as an ally rather than just a provider.

These continuous presentations are, of course, training for the customer, so that he better understands his problem and the breadth and scope of the likely solutions. If progress is made in this phase, one can start working out, very informally and in a relaxed manner, the clauses of the contract, particularly the daring ones. One by one.

When each clause is finally approved by both parties, assemble the corresponding contract, get it approved, and implement it. Then keep close, in-person contact with your customer.

Posted by Andres Agostini at February 29, 2008 4:32 PM

Comments: It's Good to Talk!

I like to meet in person and work together with my peers. Still, I can also work through the Web on my own, with the added benefits of some privacy and other conveniences. A mix of both, I think, is optimal.

How can one slow down the trends of the global economy? The more technological time elapses, the more connected and wiki we will all be. Most of the interactions I see and experience in the virtual world have extreme consequences in the real world.

I think it is nice and productive to exchange ideas over a cappuccino. The personal contact is pleasant, though it gets better when it is less frequent; then, when it happens, meeting the person becomes a splendid occasion.

As things get more automated, so will we. I, no more than any of you, invented the world. Automated systems will come to do ever more of the work. Sometimes it is of huge help to get an emotional issue ventilated through calm, discerning e-mails.

Regardless of our continued embrace of connectedness (which I highly like), I would say one must make in-person meetings a must-do. Let us recall that we are en route to Vernor Vinge's "Singularity."

Posted by Andres Agostini at February 29, 2008 4:46 PM

Comments: A Focus on Talent

The prescription for making a true talent by present standards is diverse. Within the ten most important geniuses there is Churchill again: the #1 (political) statesman from da Vinci’s times to the current moment. In one book (The Last Lion), Churchill is said to have credited a New Yorker of his day with transferring to him some methodology for capturing geniality.

A great deal of schooling is crucial. A great deal of self-schooling is even more vital. Being experienced across different tenures, industries, and clients helps beyond belief.

Studying and researching cross-referentially (across the perspective of omniscience) helps even more. Seeking mentors and tutors helps. Getting trained in various fields does so too. Hiring consultants for your personal, individual induction and orientation adds much.

One has got to have an open mind with a gusto for multidimensionality and cross-functionality, harnessing and remembering useful knowledge from everywhere, regardless of the context. I have worked on these ideas and published some “success metaphors” on the Web, both in text and video. Want them? Google it!

Learning different (even opposed) methodologies renders the combined advantages of all of them into a unique multi-approach of your own. Most of these ideas can be marshaled concurrently.

Posted by Andres Agostini at February 29, 2008 5:11 PM



Andy Webcasted !!!

* http://www.youtube.com/watch?v=5IKs2xEdF_8
* http://www.youtube.com/watch?v=naIriKmDQNc
* http://www.youtube.com/watch?v=N83YH1-2eq0
* http://www.youtube.com/watch?v=uhZ-nlU5L_E
* http://www.youtube.com/watch?v=xkx30foLTDU

Andy's Blogging....

* http://agostinicomments2.blogspot.com/

Comments: Hard Work Matters:
"Clearly, hard work is extremely important. There is a grave lack of practices of this work philosophy in the battlefield. Practicing, practicing and practicing is immeasurably relevant.

Experience accumulated throughout the years is also crucial, particularly when one is always seeking mind-expansion activities.

With it practical knowledge comes along. When consulting and training, yes, you’re offering ideas to PRESENT clients with CHOICES/OPTIONS to SOLUTIONS.

How to communicate with the client is extremely difficult. Nowadays, some technical solutions that the consultant or advisor must implement has a depth that will shock the client unless there is a careful and prudent preparation/orientation of the targeted audience.

Getting to know the company culture is another sine qua non. The personal cosmology of each executive or staff involved on behalf of the client is even more important. Likewise, the professional service expert must do likewise with the CEO, and Chairman.

In fact, in your notes, a serious consultant must have an unofficial, psychological profile of the client representatives. One has to communicate unambiguously, but sometimes helps to adapt your lexicon to that of the designated client.

From interview one –paying strong attention and listening up to the customer– the advisor must give choices while at always being EDUCATIONAL, INFORMATIVE, and, somehow, FORMATIVE/INDUCTIVE. That’s the problem.

These times are not those. When the third party possesses the knowledge, skill, know-how, technology, he/she now must work much more in ascertaining you lock in your customer’s mind and heart with yours.

Before starting the CONSULTING EFFORT, I personally like to hold a couple of informal meetings just to listen, and listen some more.

Then, I forewarn them that I will be asking a great number of questions. Afterwards, I take extensive notes and start crafting the strategy to build rapport with this customer.

Taking all the information given informally in advance by the client, I make an oral presentation to confirm that I have understood what the problem is. I also take this opportunity to capture further information and to put everyone at ease, while trying to win them over legitimately and transparently.

Then, if I see, for instance, that they do not know how to name or express their problem lucidly and accurately, I ask questions. I also offer real-life examples of similar problems encountered with other clients.

This opportunity is absolutely vital for gauging the customer's level of competency and knowledge, or lack thereof, about the issue. Once past that, I begin, informally, speaking of options to get the customer involved in picking out the CHOICE (the solution), watching for the client's initial reactions.

In my case, and many times, I must not only transfer the approaches/skills/technologies, but also institute and sustain them to the 150% satisfaction of my clients.

Those of us involved with Systems Risk Management(*) ("Transformative Risk Management") and Corporate Strategy are obliged to scan around for problems, defects, process waste, failures, etc. WITH FORESIGHT.

Once that is done, and while still "on guard," I can highlight the opportunity (upside risk) to the client.

Notwithstanding, once you already know your threats, vulnerabilities, hazards, and risks (and you have a master risk plan, equally contemplated in your business plan), YOU MUST BE CREATIVE SO THAT "HARD WORK" MAKES A UNIQUE DIFFERENCE IN YOUR INDUSTRY.

While practicing, run a zillion low-cost experiments. Conduct a universe of trial and error. Commit to serendipity and/or pseudo-serendipity. In the meantime, and as former UK Prime Minister Tony Blair says: "EDUCATION, EDUCATION, EDUCATION."

(*) It does not refer at all to insurance, co-insurance, or reinsurance. It refers rather to the multidimensional, cross-functional management of business processes so that goals and objectives are met."

Posted by Andres Agostini at February 23, 2008 4:56 PM
Simplicity to Capture Professional, Entrepreneurial Success in Century 21?
Steve Ballmer, Microsoft CEO, says a definite "NO." Likewise: "Everything should be made as simple as possible, but not simpler." (Albert Einstein). QUESTION: How does one send a man to the Moon? REPLY: By instituting "oceans" of COMPLEXITY. No COMPLEXITY, no SUCCESS. To capture SUCCESS, one must be extremely well trained and educated. One who knows and grasps complexity can afterwards play around with "simplicity." Whom can one ask about this? RESPONSE: the military-industrial complex. According to Eamonn Kelly, author of "POWERFUL TIMES" (and CEO of Global Business Network), the USA manages almost 50% of the world's defense budget. And it must engender unthinkable technologies, such as Apollo, the Space Shuttle, and the Internet. Then, after that, they are passed on to the R&D departments of universities, corporations, etc.
Preferred Sites:

www.AgosBlogs.blogspot.com

www.AndyBelieves.Blogspot.com

www.AndyBelieves2.blogspot.com

www.AndresAgostini.blogspot.com

www.youtube.com/watch?v=tOHiKT127DM
