Tuesday, January 31, 2006

Greenspan: The worst Fed chief ever


The Fed chairman thinks the central bank has done a fabulous job during his tenure. I beg to differ. Let's set the record straight.

By Bill Fleckenstein

Alan Greenspan gave a speech last year titled "Economic Flexibility." It should have been called "Damn, I'm Good," because the world's biggest serial bubble blower -- and most incompetent, irresponsible Fed chairman of all time -- tried to rewrite history. This column will endeavor to set the record straight.

At least he was nice enough to organize his speech so that the majority of objectionable material fell into seven or eight consecutive paragraphs, as he tried to set up Ben Bernanke to be the fall guy for all of the problems that Greenspan and the rest of the yes-men at the Fed have precipitated. Bernanke, chairman of the President's Council of Economic Advisers, will replace Greenspan as Federal Reserve chairman after the Federal Open Market Committee's Jan. 31 meeting.

He's got the dates but not the cause
I'll turn first to his brief 1990s synopsis, in which Greenspan claimed: "Yet the significant monetary tightening of 1994 did not prevent what must by then have been the beginnings of the bubble of the 1990s. And equity prices continued to rise during the tightening of policy between mid-1999 and May 2000."

His observation of when the mania really took hold and mine are exactly the same. It did start in late 1994. Of course, as with everything, he recognizes the end result but has absolutely no clue as to its cause. The reason for the continued rise in equity prices was that the Fed panicked in mid-1995 and reversed its tightening course after Orange County (and other leveraged entities) blew up. Next, the Fed bailed out the Asian crisis in 1997, Long-Term Capital Management in 1998 and fears of Y2K problems in late 1999.

Continuing on, he notes: "Indeed, the equity market's ability to withstand periods of tightening arguably reinforced the bull market's momentum." No, it was his endless bailouts that caused folks to believe in the notion of a "Greenspan put." Purely and simply, it was his practice of bailouts and market-cheerleading (which reached a fever pitch at the peak) that turned the boom into a bubble.

Next, he follows up with this incredible statement: "The FOMC knew (my emphasis) that tools were available to choke off the stock-market boom, but those tools would only have been effective if they undermined market participants' confidence in future stability." To which I say: Correct, that is the idea. From time to time, you have to take away the punchbowl. But just remember that term "tools," because we'll see some examples shortly.

On to his summation of the aforementioned statements: "Market participants, however, read the resilience of the economy and stock prices in the face of monetary tightening as an indication of undiscounted market strength."

That's his lame excuse for why the market went up. Wrong.

High-tech is still the scapegoat
He then turns to the dilemma the poor Fed was in: "By the late 1990s, it appeared to us that very aggressive action (my emphasis) would have been required to counteract the euphoria that developed in the wake of ..."

To finish that thought, Greenspan resorted to the cheerleading that he used at the height of the mania and laid the blame at the feet of "... extraordinary gains in productivity growth spawned by technological change," rather than his own bubble-blowing. As I read that, I am laughing, because it's just remarkable how he's still trying to insinuate that we were in a "new era." And that's what drove up stock prices, as opposed to his incompetence.

Highly allergic to accountability
His next comment: "In short, we would have needed to risk precipitating a significant recession, with unknown consequences. The alternative was to wait for the eventual exhaustion of the forces of boom." Got that? It was these unknown forces of boom -- not the Fed -- that precipitated the bubble. He followed up by saying: "We concluded that the latter course was by far the safer." What he means: We realized it was a bubble, but we didn't care because we assumed we could fix it.

So, the Fed understood the reality of the bubble while it was going on (though the Fed claimed not to at the time, a subject I discussed in my daily column last March). Nevertheless, Al said: "Relying on policymakers to perceive when speculative asset bubbles have developed and then to implement timely policies to address successfully these misalignments in asset prices is simply not realistic."

You read that right. It can't be done. It's impossible. Now, of course, it wasn't impossible. I wrote about it until I was blue in the face. Most people with an ounce of common sense knew there was a bubble under way. And, by what I've already shown, the Fed knew, too. And yet, Greenspan is still trying to say that it would be unrealistic to attempt to identify bubbles.

In addition, he was ready to hide behind another excuse: "It is difficult to suppress growing market exuberance when the economic environment is perceived as more stable ..." See? It's just too hard, and, as he already said: "We would have needed to risk precipitating a significant recession, with unknown consequences."

Rusting tools in Greenspan's garage
Even if any of his protestations were true (which I don't believe) and the Fed was afraid of damaging the economy, it has been granted specific tools to deal with periods of speculation. Among them: Regulation T, whereby margin requirements can be raised to reduce risk and change market psychology. (While raising margin requirements to even 100% may or may not have been sufficient to break the stock bubble, the Fed could have at least tried. If that failed, the Fed could then have tightened.) However, for Greenspan to pretend that all he could have done was to raise rates shows that either he doesn't know what the Fed's tools are (i.e., he's clueless) -- or he's not being truthful.

The Fed could also ask Congress to resuscitate the old Regulation X. Part of the Defense Production Act of 1950, this regulation let the Fed set minimum downpayments and maximum mortgage-repayment periods for residential properties. The Fed gave up the authority a few years later.

Of course, when Greenspan wails about not wanting to hurt the economy with rate hikes, none of his lapdogs in the press ever seem to question why the Fed hasn't used the tools at its disposal.

In any case, part of my reason for re-titling Greenspan's speech is due to the following comment: "After the bursting of the stock market bubble in 2000, unlike previous periods following large financial shocks, no major financial institution defaulted, and the economy held up far better than many had anticipated." And we all lived happily ever after.

Crowing belied by cutting
What I'd like to know is: If this was all so benign, why did he and helicopter copilot Ben Bernanke panic -- to the tune of 13 rate cuts, all the way down to 1% -- about the possibility of deflation in 2001 as the stock bubble unwound? Were it not for the even bigger, more dangerous housing bubble that Greenspan has in turn precipitated, which has only postponed the inevitable, the fallout would have been commensurate with the size of the boom.

He is right about no major financial institutions having defaulted -- though we did happen to lose Enron, WorldCom and Arthur Andersen in the process. But it was largely an equity-induced mania. And, as I've said many times, it did not leave behind a wave of bad debts. The housing bust will do just that.

Culpability, thy name is Greenspan
So, the fallout from the housing boom, the unfinished business from the stock boom and all the derivatives he's championed for his beloved deregulated financial system will combine to hit with full force somewhere down the road. By then, of course, Greenspan will be long gone. He, as well as everyone else who's incapable of understanding what really happened, will be blaming our problems on the next Fed chairman. I have no sympathy for Ben Bernanke. But we must understand what actually took place and not let this arrogant buffoon Greenspan get away with his attempt to rewrite history.

And that, ladies and gentlemen, is the last I'll have to say about Alan Greenspan, once and for all -- until he makes me really mad.

An earlier version of this column was published Oct. 4, 2005.

Bill Fleckenstein is president of Fleckenstein Capital, which manages a hedge fund based in Seattle. He also writes a daily Market Rap column on his Fleckenstein Capital Web site. His investment positions can change at any time. Under no circumstances does the information in this column represent a recommendation to buy, sell or hold any security.

Wednesday, January 25, 2006

Iran’s Oil Exchange Threatens the Greenback

by Mike Whitney
www.dissidentvoice.org
January 24, 2006

The Bush administration will never allow the Iranian government to open an oil exchange (bourse) that trades petroleum in euros. If that were to happen, hundreds of billions of dollars would come flooding back to the United States, crushing the greenback and destroying the economy. This is why Bush and Co. are planning to lead the nation to war against Iran. It is a straightforward defense of the current global system and the continuing dominance of the reserve currency, the dollar.

The claim that Iran is developing nuclear weapons is a mere pretext for war. The NIE (National Intelligence Estimate) predicts that Iran will not be able to produce nukes for perhaps a decade. So too, IAEA chief Mohammed ElBaradei has said repeatedly that his watchdog agency has found “no evidence” of a nuclear weapons program.

There are no nuclear weapons or nuclear weapons programs, but Iran’s economic plans do pose an existential threat to America, and not one that can be simply brushed aside as the unavoidable workings of the free market.

America monopolizes the oil trade. Oil is denominated in dollars and sold on either the NYMEX or London’s International Petroleum Exchange (IPE), both owned by Americans. This forces the central banks around the world to maintain huge stockpiles of dollars even though the greenback is currently underwritten by $8 trillion of debt and even though the Bush administration has said that it will perpetuate the deficit-producing tax cuts.

America’s currency monopoly is the perfect pyramid scheme. As long as nations are forced to buy oil in dollars, the United States can continue its profligate spending with impunity. (The dollar now accounts for 68% of global currency reserves, up from 51% just a decade ago.) The only threat to this strategy is the prospect of competition from an independent oil exchange, forcing the faltering dollar to go nose-to-nose with a more stable (debt-free) currency such as the euro. That would compel central banks to diversify their holdings, sending billions of dollars back to America and ensuring a devastating cycle of hyperinflation.

The effort to keep information about Iran’s oil exchange out of the headlines has been extremely successful. A simple Google search shows that NONE of the major newspapers or networks has referred to the upcoming bourse. The media’s aversion to controversial stories which serve the public interest has been evident in many other cases, too, like the fraudulent 2004 presidential elections, the Downing Street Memo, and the flattening of Falluja. Rather than inform, the media serves as a bullhorn for government policy, manipulating public opinion by reiterating the specious demagoguery of the Bush administration. As a result, few people have any idea of the gravity of the present threat facing the American economy.

This is not a “liberal vs. conservative” issue. Those who’ve analyzed the problem draw the very same conclusions: if the Iran exchange flourishes the dollar will plummet and the American economy will shatter.

Here is what author Krassimir Petrov, Ph.D in economics, says in a recent article, “The Proposed Iranian Oil Bourse”:

    From a purely economic point of view, should the Iranian Oil Bourse gain momentum, it will be eagerly embraced by major economic powers and will precipitate the demise of the dollar. The collapsing dollar will dramatically accelerate U.S. inflation and will pressure upward U.S. long-term interest rates. At this point, the Fed will find itself between ... between deflation and hyperinflation -- it will be forced fast either to take its “classical medicine” by deflating, whereby it raises interest rates, thus inducing a major economic depression, a collapse in real estate, and an implosion in bond, stock, and derivative markets, with a total financial collapse, or alternatively, to take the Weimar way out by inflating, whereby it pegs the long-bond yield, raises the Helicopters and drowns the financial system in liquidity, bailing out numerous LTCMs and hyperinflating the economy.

    No doubt, Commander-in-Chief Ben Bernanke, a renowned scholar of the Great Depression…, will choose inflation. …The Maestro has taught him the panacea of every single financial problem-to inflate, come hell or high water. …To avoid deflation, he will resort to the printing presses…and, if necessary, he will monetize everything in sight. His ultimate accomplishment will be the hyperinflationary destruction of the American currency …

So, raise interest rates and bring on “total financial collapse,” or take the “Weimar way out” and cause the “hyperinflationary destruction of the American currency.”

These are not good choices, and yet we’re hearing the same pronouncements from right-wing analysts. Alan Peters’ article, “Mullah’s Threat not Sinking In,” which appeared in FrontPage Magazine.com, offers these equally sobering thoughts about the dangers of an Iran oil exchange:
    A glut of dollar holdings by Central Banks and among Asian lenders, plus the current low interest rate offered to investor/lenders by the USA has been putting the dollar in jeopardy for some time… A twitching finger on currency's hair-trigger can shoot down the dollar without any purposeful ill intent. Most estimates place the likely drop to "floor levels" at a rapid 50% loss in value for a presently 40% overvalued Dollar.

The erosion of the greenback’s value was predicted by former Fed chief Paul Volcker, who said that there is a “75% chance of a dollar crash in the next 5 years.”

Such a crash would result in soaring interest rates, hyperinflation, skyrocketing energy costs, massive unemployment and, perhaps, depression. This is the troubling scenario if an Iran bourse gets established and knocks the dollar from its lofty perch. And this is what makes the prospect of war, even nuclear war, so very likely.

Peters continues:

    With economies so interdependent and interwoven, a global, not just American Depression would occur with a domino effect throwing the rest of world economies into poverty. Markets for acutely less expensive US exports would never materialize.

    The result, some SMEs estimate, might be as many as 200 million Americans out of work and starving on the streets with nobody and nothing able to rescue or aid them, contrary to the 1920/30 Great Depression through soup kitchens and charitable support efforts.

Liberal or conservative, the analysis is the same. If America does not address the catastrophic potential of the Iran bourse, Americans can expect to face dire circumstances.

Now we can understand why the corporate-friendly media has omitted any mention of the new oil exchange in its coverage. This is one secret that the boardroom kingpins would rather keep to themselves. It’s easier to convince the public of nuclear hobgoblins and Islamic fanatics than to justify fighting a war for the anemic greenback. Nevertheless, it is the dollar we are defending in Iraq and, presumably, in Iran as well in the very near future. (Saddam converted to the euro in 2000. The bombing began in 2001.)

There are peaceful solutions to this dilemma, but not if the Bush administration insists on hiding behind the moronic deception of terrorism or imaginary nuclear weapons programs. Bush needs to come clean with the American people about the real nature of the global energy crisis and stop invoking Bin Laden and WMD to defend American aggression. We need a comprehensive energy strategy (including government funding for conservation projects, alternative energy sources, and the development of a new line of “American-made” hybrid vehicles), candid negotiations with Iran to regulate the amount of oil it will sell in euros per year (easing away from the dollar in an orderly way), and a collective “international” approach to energy consumption and distribution (under the auspices of the UN General Assembly).

Greater parity among currencies should be encouraged as a way of strengthening democracies and invigorating markets. It promises to breathe new life into free trade by allowing other political models to flourish without fear of being subsumed into the capitalist prototype. The current dominance of the greenback has created a global empire that is largely dependent on debt, torture, and war to maintain its supremacy.

Iran’s oil bourse poses the greatest challenge yet to the dollar-monopoly and its proponents at the Federal Reserve. If the Bush administration goes ahead with a preemptive “nuclear” strike on alleged weapons sites, allies will be further alienated and others will be forced to respond. As Dr. Petrov says, “Major dollar-holding countries may decide to quietly retaliate by dumping their own mountains of dollars, thus preventing the U.S. from further financing its militant ambitions.”

There is increasing likelihood that the foremost champions of the present system will be the very ones to bring about its downfall.

Mike Whitney lives in Washington state, and can be reached at: fergiewhitney@msn.com.

Fusion for Energy: Plasma Confinement, Bubble Collapse, and Laser Beams
At this point there are a number of avenues of fusion energy research, but currently the magnetic plasma confinement (i.e. tokamak), bubble collapse (i.e. sonofusion), and laser ignition (i.e. inertial confinement) methods are receiving the most attention. Laser and sonofusion techniques rely on lasers and acoustic bubble collapse, respectively, to produce the necessary temperature and pressure for nuclear fusion. Laser ignition research is currently being pursued at the Lawrence Livermore National Laboratory's National Ignition Facility (on wikipedia).

Sonofusion, an offshoot of sonoluminescence research begun in the 1990s, hypothesizes that fusion occurs when bubbles generated by acoustic waves in fluid solutions implode violently (image of sonoluminescence credit: Kenneth S. Suslick, UIUC). Magnetic plasma confinement fusion research began in the 1950s and has since improved to the point where the energy produced approaches the energy that must be put in to keep the fusion going. In light of these successes and the preliminary state of research in the other areas, I'll focus the rest of this entry on magnetic plasma confinement.

The two fusion techniques seemingly most capable of producing excess energy useful for power generation are the laser-ignition and plasma confinement methods. Both require massive infrastructure and enormous startup costs, and both offer plenty of opportunity for high-profile failure. The laser-ignition method relies on focusing hundreds of very high power lasers on a tiny pellet of deuterium fuel, which then implodes on itself in a fashion not unlike that in a hydrogen bomb. The image on the right is of the 10-meter target chamber at the National Ignition Facility. The combined output of these lasers during the brief pulses when they are active is over 1 petawatt (1 x 10^15 watts). Plasma confinement methods require enormous magnetic fields on the order of 20 Tesla (hundreds of thousands of times stronger than Earth's field) and very large and complicated tritium breeding systems, neutron-absorbing blankets, and associated facilities. Government research will need to reduce the costs of these methods by several orders of magnitude before they become commercially viable, and international cooperation is seen as the only means to share the expenses.
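To make those two numbers concrete, here is a rough back-of-the-envelope sketch. The pulse energy and duration are assumed round figures for illustration, not facility specifications, and Earth's surface field is taken as roughly 50 microtesla:

    # Rough illustration of why pulsed lasers reach petawatt-scale peak power,
    # and of how a 20 Tesla confinement field compares with Earth's field.
    # Pulse energy and duration are assumed round numbers, not NIF specifications.
    pulse_energy_joules = 2.0e6        # assume ~2 MJ of laser light per shot
    pulse_duration_seconds = 2.0e-9    # assume a ~2 ns pulse
    peak_power_watts = pulse_energy_joules / pulse_duration_seconds
    print(f"Peak laser power ~ {peak_power_watts:.0e} W")   # ~1e15 W, i.e. a petawatt

    tokamak_field_tesla = 20.0
    earth_field_tesla = 5.0e-5         # ~50 microtesla at Earth's surface
    print(f"Field ratio ~ {tokamak_field_tesla / earth_field_tesla:.0e}")  # ~4e5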

Tuesday, January 24, 2006

Magnetic Plasma Confinement: Tokamaks and ITER


Tokamak fusion reactors have long been considered the most likely means of achieving practical nuclear fusion energy. Their basic design is a torus (or a donut) within which intense magnetic fields confine very hot plasma. There is a long list of tokamak reactor experiments, but the three of note are the biggest and most recent: the TFTR, JET, and ITER tokamaks. The Tokamak Fusion Test Reactor (TFTR) at Princeton's Plasma Physics Laboratory generated the highest temperature and set what was in 1994 a world record for energy generation. The Joint European Torus (JET), pictured above, currently holds the record for the most energy generated by a controlled fusion reaction. The International Thermonuclear Experimental Reactor (ITER), still in the design phase, holds hope of being the first break-even nuclear fusion reactor in the world. (image credit: ITER)

Like any international scientific and engineering collaboration expected to cost tens of billions of dollars, ITER has been long in planning. Although it was first proposed around 1985 as an international diplomatic research effort, final agreement on a construction site was not reached until June of 2005. Construction is expected to begin in 2008 and finish in 2016. ITER is designed to generate 500 MW of fusion power (about ten times the external heating power supplied to the plasma, and far beyond the 16 MW record held by JET) and will hopefully produce more energy than is required to keep the plasma heated and confined. The success of the project is by no means guaranteed, however, and many of the criticisms surrounding it have focused on the technical challenges. Other criticisms have noted that the neutrons released in deuterium-tritium fusion would activate the metallic parts of the reactor chamber. This induced radioactivity would create a radiological waste disposal problem, and the neutron bombardment would also shorten the life of the components in the reactor through radiation damage to the metal.
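To put the 500 MW design figure in perspective, here is a short sketch; it assumes the commonly quoted 17.6 MeV released per deuterium-tritium reaction and roughly 50 MW of external plasma heating (the stated basis for ITER's tenfold-gain target), so treat the numbers as illustrative:

    # Rough arithmetic on ITER's design targets (illustrative values only).
    MEV_TO_JOULES = 1.602e-13
    fusion_power_watts = 500e6         # planned fusion output
    heating_power_watts = 50e6         # assumed external heating power
    energy_per_dt_reaction_joules = 17.6 * MEV_TO_JOULES  # D-T energy release

    q_gain = fusion_power_watts / heating_power_watts
    reactions_per_second = fusion_power_watts / energy_per_dt_reaction_joules
    print(f"Fusion gain Q ~ {q_gain:.0f}")                            # ~10
    print(f"D-T reactions per second ~ {reactions_per_second:.1e}")   # ~1.8e20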

If ITER is largely successful, research done there over the years between 2015 and 2035 will show us what still needs developing before commercial nuclear fusion is feasible. ITER's hypothetical successor, project DEMO, is intended to produce commercial nuclear fusion energy for the first time. If ITER is unsuccessful for technical reasons, the timeline for nuclear fusion will likely be pushed even further back, keeping the "30 years away" projection true no matter when it is said.

Sunday, January 22, 2006

COLD FUSION: Myth or Fact?

COLD FUSION TURNS UP THE HEAT - FACT OR FICTION?
Myths and Facts of Cold Fusion / Condensed Matter Nuclear Science
From http://www.newenergytimes.com/PR/CFMythsFacts.htm

"Myths and Facts of Cold Fusion / Condensed Matter Nuclear Science" was presented on 26 August, 2005 to the International Conference on Emerging Nuclear Energy Systems, in Brussels, Belgium as part of the paper, "How Can Cold Fusion Be Real, Considering It Was Disproved By Several Well-Respected Labs In 1989?"
Paper: http://newenergytimes.com/Library/2005KrivitS-HowCanItBeReal-Paper.pdf
Presentation: http://newenergytimes.com/Library/2005KrivitS-HowCanItBeReal-Presentation.pdf
Audio Recording: http://newenergytimes.com/Audio/2005KrivitS-ICENES-2005.mp3

Myth 1: Cold fusion is "not reproducible." An effect is reproducible if it happens “more often than not." (Richard Garwin, IBM )
Fact 1: In the early 1990s, the rate of reproducibility was very low. As of 2003, cold fusion shows 83% average reproducibility, with some reports of 100% reproducibility [26].

Myth 2: “Nobody in mainstream science” is researching cold fusion. Mainstream scientists are those "who work in universities.” (Frank Close, Rutherford Appleton Laboratory)
Fact 2: Several dozen university scientists have been, or are researching cold fusion [27].

Myth 3: Cold fusion is “impossible according to current nuclear theory.” (John Huizenga, Chair, 1989 Department of Energy Cold Fusion Panel)
Fact 3: That was true in 1989, but it no longer is [28].

Myth 4: "The claim that cold fusion is a nuclear process producing excess power without commensurate nuclear reaction products, is pathological science." (John Huizenga)
Fact 4: The pathology ended when proportional amounts of reaction products were discovered in the early 1990s, which demonstrated conformance with the first law of thermodynamics [29].

Myth 5: Cold fusion is false because there are no significant neutrons. “There is no reason to think that the branching ratios would be different for cold fusion” than with hot fusion. (John Huizenga)
Fact 5: Cold fusion is not a colder form of hot fusion. The assumption that cold fusion should follow hot fusion branching ratios is erroneous [30].

Myth 6: No “hard evidence” supports the claims of cold fusion. (Frank Close)
Fact 6: Evidence exists for 4He, 3He, tritium, transmutation and charged particles [31].

Myth 7: Only a “dwindling band of true believers” studies cold fusion. (Robert Park, American Physics Society)
Fact 7: ~200 researchers in 13 countries are actively researching cold fusion [32].

Myth 8: Calorimetry is unreliable.
Fact 8: Many calorimeters applied to cold fusion are accurate to ±50 mW. Energy in excess of 1000 mW is frequently measured [33]. Calorimetry has been a common and trusted tool for electrochemists for over 200 years.

Myth 9: “The fact of the matter is Pons & Fleischmann's experiment never did demonstrate any excess heat. ... It was nothing more than experimental error.” (Lee Hansen, Brigham Young University) Another related myth is that all of the claims of excess heat from the last 16 years of research are all the result of operator error.
Fact 9: Wilford Hansen, of Utah State University, in a report to the state of Utah, verified the excess heat claims of Fleischmann and Pons [10]. Hundreds of observations, using a variety of calorimeters, have been made. It is unlikely that they are all erroneous [34].

Myth 10 : Cold fusion “is a simple chemical reaction that has nothing to do with fusion." (Nathan Lewis, Caltech)
Fact 10: Energy generation starts too quickly to result from storage. No specific chemical explanation has been offered for the anomalous heat. The excess heat effect is too large to be of chemical origin. Infrared microscope/ thermographs measure nanoscale hot spots that are hotter than any known chemical heat source. [35].

Myth 11: Cold fusion papers have not been published in peer-reviewed journals.
Fact 11: More than 55 peer-reviewed journals have published cold fusion papers [36].

Myth 12: If cold fusion were “a real phenomenon it would have emerged and be on the way to exploitation.” (Richard Garwin)
Fact 12: Many scientific endeavors are valid but not yet commercially viable including thermonuclear fusion energy [37].

Myth 13: Fleischmann and Pons were incompetent. "Just by looking at these guys on television, it was obvious that they were incompetent fools,” (William Happer, Princeton Plasma Physics Laboratory, former head of the U.S. Dept. of Energy Office of Energy Research)
Fact 13: A refined image does not necessarily correlate with scientific competency [38]. Fleischmann and Pons were poorly prepared by the University of Utah administration for the press conference [39]. Being scientists, not performers, they were ill-prepared for the MacNeil/Lehrer TV news show later that day, and their discomfort and unease were evident. They were asked silly questions such as "You did this in the kitchen, right?" by correspondent Charlayne Hunter-Gault. Fleischmann was also very worried about other scientists' safety, concerned that as a result of the news interview they might inadvertently replicate the "meltdown" experiment and cause fatalities.

Myth 14: Fleischmann and Pons were working "outside of their field of expertise." (John Huizenga)
Fact 14: Fleischmann and Pons were among the world's top electrochemists and were experts in their craft and pioneers in a significant new field of science [40].

Myth 15: Fleischmann and Pons "circumvented the normal peer review process." (John Huizenga)
Fact 15: Fleischmann and Pons did not announce their findings before the acceptance of their paper in a peer-reviewed journal [41].

Myth 16: No qualified scientists are convinced of the general phenomena of cold fusion.
Fact 16: Dozens of qualified scientists in universities and government laboratories are convinced that the claims of excess heat and transmutation in "cold fusion" research are valid [42].

Myth 17: Fleischmann and Pons observed large quantities of excess heat quickly after turning on their cold fusion cell.
Fact 17: In the early years of cold fusion research, initiation time often took hundreds of hours.

Myth 18: The original cold fusion experiment was "ridiculously simple." ( Fleischmann and Pons)
Fact 18: Not true. It was, and still is, highly complex.

Myth 19: Cold fusion cannot be used for destructive purposes.
Fact 19: Mankind always seems to find ways to use portable, high-density energy sources for destructive as well as constructive purposes.

Myth 20: Fleischmann and Pons were "incompetent and delusional." (Steven Koonin, Caltech)
Fact 20: The final chapter on cold fusion has not been written. It is yet to be known who was thinking clearly and who was not.

Myth 21: Cold fusion is a "fraud." (Ronald Parker, MIT)
Fact 21: Parker retracted his comment in a press release several days later.

Myth 22: Working cold fusion devices will be available soon. "Prototype cold fusion home heating units are widely expected to emerge this year or next." (Eugene Mallove, 1993)
Fact 22: 12 years later, the only unit to emerge is Dennis Cravens' (Eastern New Mexico University) experimental calorimeter and cold fusion cell which heats up his laboratory.

Myth 23: Cold fusion will provide an inexpensive, inexhaustible source of energy for the entire world.
Fact 23: This is only the hope. The future is unknown.

Thursday, January 19, 2006

Treading the light fantastic: Einstein challenged


By John Huxley
January 19, 2006

SOME things never change. But now one of science's most cherished constants is being challenged by a controversial young cosmologist and a crack team of Sydney researchers.

According to Albert Einstein's theory of relativity, enunciated a century ago, the speed of light - the "c" in his famous equation E=mc^2 - has been a constant 299,792,458 metres a second since the universe began with the Big Bang. Dr Joao Magueijo thinks Einstein, who did have second thoughts but never pursued them, got it wrong. He believes that not long after the Big Bang light hit a "speed bump" and is, in fact, slowing down.

His theory - published first in the scientific press and then in a popular book, Faster Than the Speed of Light: The Story of a Scientific Speculation - was initially greeted with derision.

"That is not so surprising," says Portuguese-born Dr Magueijo, 38, who tonight will give a public talk on his theory at the University of NSW. "This is an emotional issue. We are attacking one of the pillars of modern physics."

Whether it comes crashing down will depend largely on research now being carried out at the university by a six-member team led by Professor John Webb and Dr Michael Murphy, one of his former PhD students.

They met Dr Magueijo several years ago by chance when he was visiting Sydney on holiday with his Australian girlfriend. They decided to put the theory to the test by revisiting the birth of time.

Their research "factory" uses data from the world's biggest optical telescope, the Very Large Telescope at the Paranal Observatory in northern Chile, to study light from distant quasars more than a trillion times brighter than the sun. "The light that comes from a quasar [compact sources of intense light powered by black holes] has been travelling for most of the age of the universe - several billion years," Dr Murphy said. "It carries with it information about what happened to it along the way."

Professor Webb says he retains an open mind, but hopes the research will make or break Dr Magueijo's theory. "It's fair to say we are involved in a space race with other groups round the world. But we believe we have the best and the most data.

"The astonishing reality is that we can now measure events that occurred many billion of years ago just as precisely as those we measure here and now in the University of NSW laboratories."

Speed of Light is Constant?

Relativity points out that the mass of an object increases as its speed increases, and when it reaches the assumed constant speed of light in a vacuum, its mass becomes infinite.
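The standard textbook formula behind that statement (added here only to make the claim concrete) is the relativistic mass relation, which diverges as v approaches c:

    m = \frac{m_0}{\sqrt{1 - v^2/c^2}}

At v = 0.99c, for example, the square root is about 0.14, so the effective mass is already roughly seven times the rest mass m_0, and it grows without bound as v approaches c.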

Hence, no object and no one can travel beyond the speed of light, the assumed upper limit of speed. Traveling from Earth to the nearest star, Proxima Centauri of the Alpha Centauri system, which is 4.2 light years away, would take man a lifetime even in a spaceship moving at a reasonable speed below the speed of light. So man will forever be a prisoner in his insignificant solar system in the infinite universe.

Once the Earth’s resources are used up, humans, in all probability, will resort to cannibalism again in fulfillment of Jeremiah’s prophecy. And the Earth’s situation will be like that of the people on Easter Island. There is strong evidence that the long ears and short ears ate each other nearly to the point of extinction. We would be stuck on this planet and hopeless like the Easter Islanders, while the stars beckon far out there. Then what is the purpose and reason for all those countless billions of star systems to be up there? Are those stars just there for nothing? I don’t think so.

If man can see a star, he can reach it. To be able to do it, he has to reorient his thinking and theories. The first step is to modify or replace the theory of relativity with another theory that has no finite boundary or finite limiting-speed condition, but instead incorporates infinite possibilities and can range even to eternity.

God is the creator of light and the infinite universe. If relativity is correct, then God is perceivable and finite, both in power and existence. He is limited by His creation because He cannot even go beyond the speed of light, and it will take Him an eternity to cross the universe. Also, all God’s creations would be finite, including the universe. All these contradict God’s infinite power and his unperceivable eternal existence. Incredible contradictions indeed. Thus, relativity also makes believing in God pointless. But is this really so? Consider some of the recent findings in science.

Using the world’s largest telescope, the Keck telescope atop Mauna Kea, Hawaii, a team of experimentalists led by John Webb, a professor at the University of New South Wales in Sydney, Australia, has observed from their collected data that light from a distant quasar has patterns of light absorption that could not be explained without assuming a change in a basic constant of nature called the fine structure constant, a combination of three other universal constants: (a) the electron charge, which is assumed not to change, (b) the speed of light, which is assumed to be constant, and (c) Planck’s constant. Paul Davies of Macquarie University in Sydney said that the discrepancy could only be explained if there is a change in either (a) or (b). If (b) is correct - that is, if the speed of light is constant - then (a) is wrong, which violates the sacrosanct second law of thermodynamics. (The second law of thermodynamics implies that you cannot get something from nothing.) This is unacceptable. So the only alternative is that (a) is correct and (b) is wrong. That is, the speed of light is not constant, and the second law of thermodynamics is preserved. The conclusion is also supported by data from Davies’ team’s studies of black holes. Moreover, Davies said that the variability of the speed of light is due to the possibility that the speed has slowed over billions of years.
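For reference, the standard definition of the fine structure constant (not spelled out in the article) combines exactly the three quantities just mentioned - the electron charge e, Planck's constant (through the reduced constant ħ), and the speed of light c:

    \alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137}

An observed drift in α therefore forces the choice the paragraph describes: attribute the change to e, to Planck's constant, or to c.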

Light is slowed down in transparent media, such as air, water, glass, plastic, and diamond. The ratio by which it is slowed is called the refractive index of the medium and is always greater than one. This was discovered by Jean Foucault in 1850.
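As a worked example with standard values, the speed of light in a medium is the vacuum speed divided by the refractive index, so for water (n ≈ 1.33):

    v = \frac{c}{n} \approx \frac{3.0 \times 10^{8}\ \text{m/s}}{1.33} \approx 2.3 \times 10^{8}\ \text{m/s}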

The physics team led by Lene Vestergaard Hau, a researcher at the Rowland Institute for Science and Harvard University, used a Bose-Einstein condensate to slow laser light down to 38 mph. And the team is getting new lasers in the lab that should enable them to slow the speed of light to 120 ft/hr, and possibly achieve a light speed of almost zero.

As shown in the preceding paragraphs, the speed of light in the upper (in a vacuum) and lower (in a medium) limits can change, especially in the upper limit, which contradicts the hypothetical assumption in relativity that light speed is constant. By the way, in physics, "the speed of light," in general, means the speed of light in a vacuum. But is there really such a thing as an absolute vacuum in the universe? A vacuum is such an ideal space, which in reality does not exist. Aside from gravity, there is always something in there, an unknown medium that no one knows, that decelerates light. Our knowledge of the infinite universe, comparatively speaking, is less than a dot on this page. The things humans don’t know are still limitless. Thus, humans can be compared to insignificant microbes under the belly of a carabao who arrogantly think that they are greater than the carabao and who define a relativistic speed for the carabao.

The hypothetical assumption that "the speed of light is constant" is the foundation of relativity. Utilizing the Michelson-Morley experiment and the Lorentz invariance equations, an equation can be formulated, and when the assumed constant speed of light "c" is substituted, it is simplified and reduced to the now famous equation E=mc^2. However, if the foundation of anything is wrong and unstable, in time the whole structure will collapse, and so will the theory of relativity.

The discoveries of the team of Australian scientists, though revolutionary, are still FAR SHORT of and LESS CONSISTENT with God’s perceivable attributes. Anyway, in the Bible, God is the source of light and dynamic energy. He is vigorous, infinite, and almighty in power. At the instant light emanates from Him, "the light speed is tremendous and infinite." Thus, "the upper limit of the speed of light is infinite, not a finite constant." But "light decelerates, regardless of time, as it traverses the different regions of the universal space, from multi-dimensional superhyperspaces to hyperspaces, then to three-dimensional spaces." Note that scientists and mathematicians formulated the multi-dimensional superstring theory without knowing that the Bible writers had written about similar ideas ahead of their time by more than a thousand years. God’s angels span the hyperspaces and the three-dimensional spaces at staggering speeds beyond human imagination, many times over the speed of light. Thus, they also contradict the relativistic limitation that nothing can travel beyond the speed of light. Actually, it is also possible for a material particle to have a speed greater than the speed of light in a medium. The phenomenon of Cherenkov radiation is a good example. If this particle is faster than light in a medium, there is a possibility that it might be faster than light in a vacuum, assuming that light speed is not constant. Also, scalar waves travel faster than the speed of light. Nevertheless, space travel to the stars and beyond is implied to be granted to man by God, to be among the stars, when He told Abraham to look up to the heavens and count the stars if he can.

Finally, space travel with speed beyond the speed of light is attainable only if man reorients his thinking and comes up with an alternative theory or revises the theory of relativity. He must FIRST accept that "light is NOT constant." SECOND, he must look at things from an infinite point of view, not a finite point of view. THIRD, he must not settle for a finite boundary or a finite limiting speed condition. FOURTH, Newton’s first law of motion, which states that "a body at rest remains at rest, and a body in motion remains in motion with CONSTANT velocity along the same straight line unless acted upon by some resultant force," must now be made a corollary instead, and generally RESTATED as "a body at rest remains at rest, and a body in motion remains in motion with INFINITE velocity along the same straight line unless acted upon by some resultant force TO DECELERATE AND CHANGE DIRECTION." And FIFTH, SIXTH, and SEVENTH are the statements in quotation marks in the preceding paragraph. All these seven changes and conditions are still nothing, but they are a closer approximation to God’s perceivable attributes as defined in the Bible than the ideas arrived at by the Australian team. If these seven theoretical ideas are backed up by new engineering discoveries, then travel to the stars with speed beyond the speed of light is attainable. But how?

The Boundaries of Reality: Digital Matrix "Hyperspaces"

Quantum Physics:
The Boundaries of Reality
by Chuck Missler

The startling discovery of modern science is that our physical universe is actually finite. Scientists now acknowledge that the universe had a beginning. They call the singularity from which it all began the "Big Bang."

While the details among the many variants of these theories remain quite controversial, the fact that there was a definite beginning has gained widespread agreement.[1] This is, of course, what the Bible has maintained throughout its 66 books.

From thermodynamic considerations, it also appears that all processes in the universe inevitably contribute the losses from their inefficiencies to the ambient temperature, and thus the universe ultimately will attain a uniform temperature in which no work (all of which ultimately derives from temperature differences) will be able to be accomplished. Scientists call this ultimate physical destiny the "heat death."

Mankind, therefore, finds itself caught in the finite interval between the singularity that began it all and a finite termination. The mathematical concept of infinity - in any spatial direction or in terms of time - seems astonishingly absent in the physical macrocosm, the domain of the astronomers and cosmologists.

In the microcosmic domain, there also appears to be an even more astonishing boundary to smallness. If we take a segment of length, we can divide it in half. We can take one of the remaining halves, and we can divide it in half again. We naturally assume that this can go on forever. We assume that no matter how small a length we end up dealing with, we can always - at least conceptually - divide any remainder in half. It turns out that this is not true. There is a length known as the Planck length, 10^-33 centimeters, that is indivisible.

The same thing is true of mass, energy, and even time. There is a unit of time which cannot be subdivided: 10^-43 seconds. It is in this strange world of subatomic behavior that scientists have now encountered the very boundaries of physical reality, as we experience it. The study of these subatomic components is called quantum mechanics, or quantum physics.
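These two limits are the Planck length and the Planck time. Their standard definitions (added here for reference, in terms of Newton's constant G, the reduced Planck constant ħ, and the speed of light c) are:

    \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-33}\ \text{cm}, \qquad t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \text{s}

The 10^-43 second figure quoted above is this Planck time rounded to its order of magnitude.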

The startling discovery made by the quantum physicists is that if you break matter into smaller and smaller pieces you eventually reach a point where those pieces - electrons, protons, et al. - no longer possess the traits of objects. Although they can sometimes behave as if they were a compact little particle, physicists have found that they literally possess no dimension.

Another disturbing discovery of the physicists is that a subatomic particle, such as an electron, can manifest itself as either a particle or a wave.

If you shoot an electron at a television screen that has been turned off, a tiny point of light will appear when it strikes the phosphorescent chemicals that coat the glass. The single point of impact which the electron leaves on the screen clearly reveals the particle-like side of its nature.

But that is not the only form the electron can assume. It can also dissolve into a blurry cloud of energy and behave as if it were a wave spread out over space. When an electron manifests itself as a wave, it can do things no particle can. If it is fired at a barrier in which two slits have been cut, it can go through both slits simultaneously. When wavelike electrons collide with each other they even create interference patterns.

It is interesting that in 1906, J. J. Thomson received the Nobel Prize for proving that electrons are particles. In 1937 he saw his son awarded the Nobel Prize for proving that electrons were waves. Both father and son were correct. From then on, the evidence for the wave/particle duality has become overwhelming.

This chameleon-like ability is common to all subatomic particles. Called quanta, they can manifest themselves either as a particle or a wave. What makes them even more astonishing is that there is compelling evidence that the only time quanta ever manifest themselves as particles is when we are looking at them.

The Danish physicist Niels Bohr pointed out that if subatomic particles only come into existence in the presence of an observer, then it is also meaningless to speak of a particle's properties and characteristics as existing before they are observed.

But if the act of observation actually helped create such properties, what does that imply about the future of science?

Anyone who isn't shocked by quantum physics has not understood it. - Niels Bohr

It gets worse. Some subatomic processes result in the creation of a pair of particles with identical or closely related properties. Quantum physics predicts that attempts to measure complementary characteristics on the pair - even when traveling in opposite directions - would always be frustrated. Such strange behavior would imply that the particles would have to be interconnected in some way so as to be instantaneously in communication with each other.

One physicist who was deeply troubled by Bohr's assertions was Dr. Albert Einstein. Despite the role Einstein had played in the founding of quantum theory, he was not pleased with the course the fledgling science had taken.

In 1935 Einstein and his colleagues Boris Podolsky and Nathan Rosen published their now-famous paper, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?"[2]

The problem, according to Einstein's Special Theory of Relativity, is that nothing can travel faster than the speed of light. The instantaneous communication implied by the view of quantum physics would be tantamount to breaking the time barrier and would open the door to all kinds of unacceptable paradoxes.

Einstein and his colleagues were convinced that no "reasonable definition" of reality would permit such faster-than-light interconnections to exist, and therefore Bohr had to be wrong. Their argument is now known as the Einstein-Podolsky-Rosen paradox, or EPR paradox for short.

Bohr remained unperturbed by Einstein's argument. Rather than believing that some kind of faster-than-light communication was taking place, he offered another explanation:

If subatomic particles do not exist until they are observed, then one could no longer think of them as independent "things." Thus Einstein was basing his argument on an error when he viewed twin particles as separate. They were part of an indivisible system, and it was meaningless to think of them otherwise.

In time, most physicists sided with Bohr and became content that his interpretation was correct. One factor that contributed to Bohr's following was that quantum physics had proved so spectacularly successful in predicting phenomena that few physicists were willing to even consider the possibility that it might be faulty in some way. The entire industries of lasers, microelectronics, and computers have emerged on the reliability of the predictions of quantum physics.

The popular Caltech physicist Richard Feynman summed up this paradoxical situation well:

I think it is safe to say that no one understands quantum mechanics... In fact, it is often stated that of all the theories proposed in this century, the silliest is quantum theory. Some say that the only thing that quantum theory has going for it, in fact, is that it is unquestionably correct.

When Einstein and his colleagues first made their proposal, technical reasons prevented any empirical experiments actually being performed. The broader philosophical implications were, ironically, ignored and swept under the carpet.

Hyperspaces

The ancient Hebrew scholar Nachmanides, writing in the 13th century, concluded from his studies of the text of Genesis that the universe has ten dimensions: that four are knowable and six are beyond our knowing.

Particle physicists today have also concluded that we live in ten dimensions. Three spatial dimensions and time are directly discernible and measurable. The remaining six are "curled" in less than the Planck length (10^-33 centimeters) and thus are only inferable by indirect means.[3]

(Some physicists believe that there may be as many as 26 dimensions.[4] Ten and twenty-six emerge from the mathematics associated with superstring theory, a current candidate in the pursuit of a theory to totally integrate all known forces in the universe.)

Fracture in Genesis 3?

There is a provocative conjecture that these ten (or more) dimensions were originally integrated, but suffered a fracture as a result of the events summarized in Genesis Chapter 3. The resulting upheaval separated them into the "physical" and "spiritual" worlds.

There appears to be some Scriptural basis for an original close coupling between the spiritual and physical world. The highly venerated Onkelos translation of Genesis 1:31 emphasizes that "...it was a unified order."

The suggestion is that the current physics, including the entropy laws ("the bondage of decay"), was a result of the fall.[5]

The entropy laws reveal a universe that is "winding down." It had to have been initially "wound up." This windup - the reduction of entropy, or the infusion of order (information) - is described in Genesis 1 in a series of six stages. The terms used in this progressive reduction of entropy (disorder) are, erev and boker, which ultimately led to their being translated "evening" and "morning."

Erev and Boker

Erev is dark, obscure randomness; it is maximum entropy. As darkness envelopes our horizon, we lose the ability to discern order or patterns. The darkness is "without form and void."

From this term we derive the current sememe for "evening," when the encroaching darkness begins to deny us the ability to discern forms, shapes, and identities.

Boker is the advent of light, where things begin to become discernible and visible; order begins to appear.

This relief of obscurity, and the attendant ability to begin to discern forms, shapes, and identities, has become associated with dawn or "morning," as the early twilight begins to reveal order and design. Evenings and mornings constituted the principal stages of creation. Six "evenings" and "mornings" became the "days" constituting the creation "week." However, what we know about the physical universe comes only from observing the universe after the upheavals of Genesis 3.
This article was originally published in the
July 1998 Personal Update NewsJournal.


NOTES


1. For a more complete discussion, see The Creator Beyond Time and Space, by Chuck Missler and Mark Eastman.

2. Albert Einstein, Boris Podolsky, and Nathan Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review, 47 (1935), p.777.

3. Michio Kaku, Hyperspace, Oxford University Press, New York, 1994.

4. Strangely, 26 is the gematria of the Tetragrammaton, YHWH.

5. Hebrews 11:3; Romans 8:19-23; Psalm 102:25-27; Proverbs 16:33; Ephesians 1:11; Hebrews 1:2-3; Colossians 1:16,17.

Monday, January 16, 2006

The flat or chilled Universe below the hyperspace


The maximum energy density, in the sense of oscillating particles, that the quantum vacuum can contain corresponds to the maximum wave or particle energy that space-time can support. Below the Hyperspace, at 0 J, we reach a completely flat universe with no quantum waves or zero point energy at all. Actually, the zero point energy modules get transformed into strange indestructible entities that can survive and coexist in the flat Universe.

The flat Universe actually consists of an infinite number of dimensions. No quantum waves, no energy fields, and no kind of agitation survive in that environment. There is nothing below it. There is nothing beyond it. It is endless and truly infinite.

There are entities unknown to physics that exist there. These are zero point energies transformed into strange entities. These entities actually form part of the flat Universe. The chilled or flat universe actually is responsible for all activities in the Hyperspace as well as in the countless universes contained by the Hyperspace.

According to scientists the chilled Universe is very strange and cannot be defined by quantum waves or any energy or radiation form that we can visualize.

However, there are spatial structures, unavailable to dimensions below five, that form the flat Universe. These spatial structures of much higher dimension allow the containment of many five-dimensional Hyperspaces.

The spatial structures in the flat universe create spatial projections that extend into the spatial structures of the Hyperspace. That is the means by which the flat or chilled universe communicates with the Hyperspaces. The Hyperspaces in turn create the appropriate electromagnetic and gravitational radiation to control, initiate, and manipulate individual universes.

The chilled universe can never be destroyed since, in the quantum sense, it does not exist. Its existence is virtual, and from the physical Universe we can never see it, feel it, or reach it.

Cold Fusion Confirmed?

(Post #164 - Physics)

In nuclear physics, fusion is a kind of nuclear reaction in which two or more light nuclei combine to create other nuclei and particles. For example, two Deuterium (also called heavy Hydrogen, with one proton and one neutron) nuclei can fuse to produce a Tritium (heavier Hydrogen, with one proton and two neutrons) nucleus plus a free proton. Many such fusion reactions are exothermic; that is, a net amount of energy is released when the particles combine. The major bottleneck that prohibits us from using fusion as an energy source is that it usually requires extremely high speeds of collision (and thus high temperatures) to fuse the particles together.

Deuterium to Tritium Fusion (Courtesy: PowerFrontiers)
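For concreteness, the standard textbook energetics of the example reaction above (the figures come from tabulated nuclear masses, not from the post itself) are:

    {}^{2}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{3}\mathrm{H}\,(1.01\ \text{MeV}) + p\,(3.02\ \text{MeV}), \qquad Q \approx 4.03\ \text{MeV}

A few MeV per reaction is roughly a million times the energy of a typical chemical bond, which is why fusion is so attractive as an energy source despite the difficulty of igniting it.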
One controversial approach that is claimed to have successfully initiated fusion reactions at low bulk temperatures is to collapse bubbles containing Deuterium using sound waves.

The collapsing bubble is supposed to generate high temperatures inside itself, thereby initiating fusion. Now the latest research by scientists at Purdue University might have found solid evidence for it.

The new findings (by Yiban Xu and Adam Butt) are published in the journal Nuclear Engineering and Design.

A glass test chamber about the size of two coffee mugs was filled with a liquid called Deuterated Acetone, which is a compound that contains Deuterium atoms.

The researchers exposed the test chamber to neutrons and then bombarded the liquid with a specific frequency of ultrasound, which caused tiny bubble cavities to form. The bubbles then expanded to a much larger size before imploding, apparently with enough force to cause fusion reactions!

The researchers found evidence of Tritium (a product of the fusion reaction). The experiment also yielded neutrons, whose energy was as expected for such fusion reactions. Interestingly, the same results were not seen when normal acetone (which has Hydrogen instead of Deuterium) was used, thus bolstering the findings.
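A short sketch of why the measured neutron energy is "as expected": deuterium-deuterium fusion has a second branch, D + D -> He-3 + n, that releases about 3.27 MeV (a standard value), and simple two-body momentum conservation hands the lighter neutron most of that energy. The calculation below is only this kinematic estimate:

    # Expected neutron energy from the D + D -> He-3 + n fusion branch.
    # With the reacting deuterons essentially at rest, the two products carry
    # equal and opposite momenta, so the lighter particle gets more energy:
    # E_n / E_He3 = m_He3 / m_n.
    Q_MEV = 3.27                        # energy released in this branch (standard value)
    m_neutron, m_helium3 = 1.0, 3.0     # approximate masses in nucleon units
    neutron_energy_mev = Q_MEV * m_helium3 / (m_helium3 + m_neutron)
    print(f"Expected neutron energy ~ {neutron_energy_mev:.2f} MeV")  # ~2.45 MeV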

If the findings are independently confirmed, this would be a watershed moment in the history of science. Not only would it make costly and behemoth constructions like the Tokamak reactor (to be built in France by a six-country alliance) superfluous, it could also result in new energy production technologies at a much earlier date. I am keeping my fingers crossed.

Sunday, January 15, 2006

DOE Warms to Cold Fusion

Whether outraged or supportive of DOE's planned reevaluation of cold fusion, most scientists remain deeply skeptical that it's real.

The cold fusion claims made in 1989 by B. Stanley Pons and Martin Fleischmann didn't hold up. But they did spawn a small and devoted coterie of researchers who continue to investigate the alleged effect. Cold fusion die-hards say their data from the intervening 15 years merit a reevaluation -- and a place at the table with mainstream science. Now they have the ear of the US Department of Energy.
"I have committed to doing a review" of cold fusion, says James Decker, deputy director of DOE's Office of Science. Late last year, he says, "some scientists came and talked to me and asked if we would do some kind of review on the research that has been done" since DOE's energy research advisory board (ERAB) looked at cold fusion nearly 15 years ago. "There may be some interesting science here," Decker says. "Whether or not it has applications to the energy business is clearly unknown at this point, but you need to sort out the science before you think about applications."

DOE is still working out the details, Decker says, but a review of cold fusion will begin in the next month or so and "won't take a long time -- it's a matter of weeks or months."


Turning up the heat
Last summer, after the 10th International Conference on Cold Fusion in Cambridge, Massachusetts, participants came away energized, says the conference's organizer, MIT theorist Peter Hagelstein. About 150 people attended the conference; the number of people working on cold fusion or, as some of them prefer to call it, low-energy nuclear reactions, is perhaps several hundred worldwide, most of them outside the US. Says Hagelstein, "Everyone was convinced things would start changing. The question on the table is, Can we establish to the satisfaction of the scientific community that there is science here?"

"The field has made a huge amount of progress," Hagelstein says. "In 1989, it was not clear if there was an excess heat effect or not. Over the years, it's become clear there is one. It wasn't clear if there was a low-level emission of nuclear products. Over the years it's become clear that, yes, there is. In addition, other new effects have surfaced."

"It's either my good luck or my bad luck, but I discovered there was something worthy of pursuit," says Michael McKubre, an electrochemist at SRI International, a nonprofit research institute in Menlo Park, California. McKubre's experiments are along the lines of Pons and Fleischmann's. A typical setup consists of a palladium cathode at the center of a helical platinum anode in a solution of heavy water with lithium salt. An applied current dissociates the deuterium, and deuterons load into the palladium. Experiments take a couple of weeks and "leaving them to sit is where most of the tricks are," says McKubre. Among the tricks, he says, are loading the palladium with sufficient concentrations of deuterons and increasing the signal-to-noise ratio in heat and helium measurements. "The numbers are what you expect for two deuterons fusing to produce helium-4, with about 24 MeV per helium nucleus. There is a nuclear effect that produces useful levels of heat. I know it's true."

"With knowledge comes responsibility," continues McKubre. "We know that this has economic implications and, potentially, security implications. The main application that cold fusion enthusiasts foresee following from their work is a clean source of energy; transmutation of nuclear waste and tritium production to augment weapons are also on their list. But, says McKubre, to solve "the various problems in scaling up the effect to make it more easily studied and potentially useful, we have to involve the scientific community."

As it is, the scientific community generally shuns cold fusion. "There is pretty much no possibility for funding in the area at this time, and no possibility of getting published," says Hagelstein. "Because the area is tainted, colleagues don't want to be seen talking about it." Adds Randall Hekman, a former judge and founder of Hekman Industries, an energy exploration company in Grand Rapids, Michigan, "There seems to be a scientific McCarthyism that puts a chilling effect on anyone who gets into this field. I feel for the scientists who do this work and who are being ostracized. That's got to change."

Change is exactly what cold fusion researchers hope will follow from the DOE review: They want vindication, funding, and, with those, better chances of developing applications of cold fusion. Says Hagelstein, "If the review is done properly, it should come back with a thumbs up."


A long shot
Among scientists, skepticism about the credibility and reproducibility of cold fusion remains widespread. "Nobody is smart enough to say it is absolutely impossible, but extraordinary claims demand a very high standard of proof," says Steven Koonin, who recently took a leave from Caltech to become chief scientist at the London-based energy company BP and who served on the original ERAB panel. The best route to respectability, he says, would be for cold fusion researchers to publish in respected refereed journals. "I think a review is a waste of time," says Princeton University physicist Will Happer, another member of the earlier ERAB panel and former head of DOE's Office of Energy Research (now the Office of Science). "But if you put together a credible committee, you can try to put the issue to bed for some time. It will come back. The believers never stop believing."

And the skeptics are raising their eyebrows at DOE because of the appearance of political favors in setting up the meeting between Decker and cold fusion researchers. According to Hekman, "I am from Michigan. [Energy Secretary Spencer Abraham] is from Michigan. I know him. That opened the door." But, he adds, "we had to jump through hoops. We had to make a prima facie case first before any meeting would be set." Another Michigan connection is representative Vernon Ehlers (R-MI), a physicist by training, who says that he is "personally very skeptical" about cold fusion, but "it's likely time for a new review because there is enough work going on and some of the scientists in the arena are from respected institutions." Ehlers says that although he made an inquiry to DOE about a cold fusion review, "there was no political pressure."

Some scientists, too, are sympathetic to the cold fusion cause. "There are quite a few people who are putting their time into this. They are working under conditions that are bad for their careers. They think they are doing something that may result in some important new finding," says MIT's Mildred Dresselhaus, an ERAB panel veteran and former head of DOE's Office of Science. "I think scientists should be open minded. Historically, many things get overturned with time." Noting that DOE's science budget has not increased in years, she adds, "When you feel poor, you don't invest in long shots. This is kind of a long shot."

"The critical question is, How good and different are [the cold fusion researchers'] new results?" says Allen Bard, a chemist at the University of Texas at Austin. "If they are saying, 'We are now able to reproduce our results,' that's not good enough. But if they are saying, 'We are getting 10 times as much heat out now, and we understand things,' that would be interesting. I don't see anything wrong with giving these people a new hearing." In ERAB's cold fusion review in 1989, he adds, "there were phenomena described to us where you could not offer alternative, more reasonable explanations. You could not explain it away like UFOs."

Toni Feder, Physics Today, April 2004

Friday, January 13, 2006

Internet TV Transcends Categories and Borders


Samsung Electronics unveils the first mobile phone for the mobile wireless Internet service WiBro on Nov. 13, 2005.

The heads of two global IT leaders at the Consumer Electronics Show in Las Vegas last week effectively declared war on conventional broadcasting. “We’re at the threshold of a new entertainment era," Intel CEO Paul Otellini said, while Microsoft founder Bill Gates said consumers will soon be watching fast Internet-based TV in their living rooms.

Content will come not through the television networks but via the fast Internet. To make that happen, Intel and Microsoft have formed an unprecedented union of the giants, whereby Microsoft will provide programming from DirecTV, the largest satellite broadcaster in the U.S., to customers’ personal computers.

The two one-time bitter rivals have joined hands to launch something called Triple Play Service (TPS), a package of broadcasting, Internet and phone service. Over 200 companies are in competition overseas. Because using this package is 30 to 50 percent cheaper than paying the fee for each service separately, TPS is gaining in popularity.
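
To make the 30 to 50 percent claim concrete, here is a trivial sketch with invented prices (none of these numbers come from the article):

```python
# Illustration only: what a 30 to 50 percent bundle discount looks like
# on a monthly bill.  The individual prices are invented, not figures
# from the article.
broadband, tv, phone = 30.0, 40.0, 20.0   # monthly fees, arbitrary currency
separate = broadband + tv + phone

for discount in (0.30, 0.50):
    bundle = separate * (1 - discount)
    print(f"{discount:.0%} cheaper: {separate:.0f} separately vs {bundle:.0f} bundled")
```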

The U.K. is the most advanced country in the field of TPS. Over 10 percent of broadband Internet users subscribe to TPS services, and TPS provider Home Choice has secured 240,000 subscribers, mainly in London. But satellite and cable companies are not just sitting back. British Sky Broadcasting acquired high-speed Internet company EasyNet, and British Telecom has also joined the market, with its CEO Christopher Bland saying consumers are not interested in which network the content comes through but in its quality.

Online firms are aggressive in dismantling the wall surrounding the broadcasting industry. The world’s largest Internet search engine Google is already regarded as a potential leader bringing about a seismic shift in broadcasting. Google co-founder Larry Page said the company will start selling TV programs through Google Video, providing multimedia content by CBS and NBA basketball games at a price of about US$1.50-4.00 per episode. Google is also considering selling foreign multimedia content over the Internet -- a sign that the last limits and boundaries for TV are coming down.

Apple also provides online multimedia and music content, with episodes of the ABC hit soap opera “Lost” priced at US$1.99 each. Yahoo has announced it is launching a service called Yahoo Go, which enables consumers to log on to Yahoo sites through a computer, mobile phone or television. It will first launch a free video service with commercials and start a pay-per-view service from the end of this year.

Consumers will eventually benefit from heated competition between global companies, since boundaries no longer exist for TV via the Internet. International calls cost only W10-30 (1-3 cents), and search engines can be accessed anywhere.

But Korea is falling behind in the global trend because of regulations and a weak system. To launch their own Triple Play Service, communication companies here should be allowed to transmit broadcasting programs and cable TV companies should be able to make inroads into Internet phone business. However, the Korean Broadcasting Commission stops communication firms from branching out, and the Ministry of Information and Communication stands in the way of cable TV firms diversifying.

Even if the two authorities were to change their minds, domestic consumers would not be able to benefit from lower prices -- the most attractive trait of TPS -- under the current system, since KT, the dominant operator in the fast Internet service market, needs permission from the ministry to lower service charges.

The world is moving towards Internet-based TV, and the nationality of the service provider is no longer an issue. Whether the program is received through cable TV or the fast Internet network is no longer significant. One expert in the field says if the government hinders the development of Korean companies through strict regulations, Korean consumers will simply turn to other sources like Yahoo, Google and Microsoft to get the entertainment they want. The “second Internet revolution” has already broken down the boundaries of broadcasting and communication, he says.

(englishnews@chosun.com )

Single Asian Currency Comes a Step Closer to Reality

The Asian single currency, which so far only exists in the minds of economists and officials with international organizations, will take on more concrete reality soon. The Asian Development Bank plans to publicize the Asian currency unit (ACU), a notional unit of exchange based on a "basket" or weighted average of currencies used in the 10 ASEAN member countries plus South Korea, China and Japan, the Yomiuri Shimbun and others reported Friday.

But that does not mean that any bills or coins will circulate any time soon. The ACU is only the first step toward the integration of Asian currencies, a “virtual currency” that takes into consideration GDP and trade volume of each of the 13 nations and serves as a gauge for governments to implement foreign exchange policies. So far, Japan and China have tried to make their own national currency into the Asian currency, but their jostling in effect cancelled out the efforts of the other. That is why the ACU is gaining support on the road to a single currency.
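
For the curious, here is a minimal sketch of how such a basket unit could be valued, loosely in the spirit of the old European ECU; the currency amounts and exchange rates are invented for illustration, since the actual ACU weights are not given here:

```python
# Sketch of a notional "basket" unit, loosely in the spirit of the ACU
# (and of the old European ECU).  The currency amounts and exchange
# rates below are invented for illustration; a real ACU would derive
# them from each member's GDP and trade volume.
amounts  = {"JPY": 55.0, "CNY": 2.0, "KRW": 250.0}        # units per basket
usd_rate = {"JPY": 0.0090, "CNY": 0.1250, "KRW": 0.0010}  # USD per unit

def basket_value_usd(amounts, usd_rate):
    """USD value of one basket unit at the given exchange rates."""
    return sum(qty * usd_rate[ccy] for ccy, qty in amounts.items())

print(f"1 basket unit = {basket_value_usd(amounts, usd_rate):.3f} USD")  # ~0.995
```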

Given that it took more than 30 years for Europe to launch its single currency, the euro, Asians probably also have a long road ahead until the ACU or its successor clinks in their pockets. However, it could take less time than in Europe, since internal trade volume in the region is increasing faster than external trade volume with the U.S. or Europe, according to Yun Deok-ryong, a researcher with the Korea Institute for International Economic Policy.

Still, many obstacles lie ahead. The U.S. above all is likely to worry that it will lose its influence over Asian economies and use the International Monetary Fund (IMF) to block the introduction of the ACU. The launch of the Asian Monetary Fund, which is to coordinate monetary policies in the region, faces objections from the U.S., which does not want to see an Asian single currency emerge as another key currency alongside the dollar and euro in the global financial market, an official with the Ministry of Finance and Economy said.

(englishnews@chosun.com )

Sunday, January 08, 2006

Stirling-Cycle Engine Powered by Solar Energy


Weird. A company called Stirling Energy Systems plans to build a huge power station based upon a Stirling solar array. And when I say huge, I mean 20,000 collectors spread over 4,500 acres. Unfathomably massive.

Each solar dish measures 37 feet across and is computer-controlled to track the sun. The thermal energy is focused on a collector which heats hydrogen gas in a closed loop, just like the toy Stirling engine shown below. The engine's pistons drive an electric generator, and voila -- affordable electricity without pollution. They estimate a 1,000 MW array will be able to generate electricity at an incredibly affordable six cents per kWh.
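
Some quick arithmetic on those numbers; the 25 percent capacity factor is my own assumption, typical for concentrating solar, not an SES figure:

```python
# Rough numbers implied by the article.  The 25% capacity factor is an
# assumption typical of concentrating solar, not an SES figure.
dishes          = 20_000
array_mw        = 1_000.0   # nameplate capacity, per the article's estimate
price_per_kwh   = 0.06      # USD
capacity_factor = 0.25      # assumed

kw_per_dish = array_mw * 1_000 / dishes
annual_kwh  = array_mw * 1_000 * capacity_factor * 8_760

print(f"~{kw_per_dish:.0f} kW per dish")                        # ~50 kW
print(f"~{annual_kwh / 1e9:.2f} billion kWh per year")          # ~2.19
print(f"~${annual_kwh * price_per_kwh / 1e6:.0f} million per year at 6 cents/kWh")  # ~$131M
```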

Talk about a huge comeback for this classic technology.


A Depiction of a Dish Stirling On Sun



What is a Stirling Engine?

On September 27, 1816, Robert Stirling applied for a patent for his Economiser at the Chancery in Edinburgh, Scotland. By trade, Robert Stirling was actually a minister in the Church of Scotland and he continued to give services until he was eighty-six years old. But, in his spare time, he built heat engines in his home workshop. Lord Kelvin used one of the working models during some of his university classes.

In 1850 the simple and elegant dynamics of the engine were first explained by Professor Macquorn Rankine. Approximately one hundred years later, the term "Stirling engine" was coined by Rolf Meijer in order to describe all types of closed-cycle regenerative gas engines.

Today, Stirling engines are used in some very specialized applications, like submarines or auxiliary power generators, where quiet operation is important. Stirling engines are unique heat engines because their theoretical efficiency is nearly equal to the theoretical maximum for any heat engine, known as the Carnot cycle efficiency. Stirling engines are powered by the expansion of a gas when heated, followed by the compression of the gas when cooled. The Stirling engine contains a fixed amount of gas which is transferred back and forth between a "cold" end and a "hot" end. The "displacer piston" moves the gas between the two ends, and the "power piston" changes the internal volume as the gas expands and contracts.
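
To see what that Carnot limit looks like in practice, here is a small sketch with assumed receiver and cooling temperatures (they are illustrative guesses, not SES specifications):

```python
# Carnot limit for a heat engine running between a hot and a cold
# reservoir.  The temperatures are illustrative guesses for a solar
# dish receiver and ambient cooling, not SES specifications.
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum possible efficiency of any heat engine (as a fraction)."""
    return 1.0 - t_cold_k / t_hot_k

t_hot, t_cold = 993.0, 313.0   # ~720 C receiver, ~40 C cooling (assumed)
print(f"Carnot limit = {carnot_efficiency(t_hot, t_cold):.0%}")  # ~68%
```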

The gases used inside a Stirling engine never leave the engine. There are no exhaust valves that vent high-pressure gases, as in a gasoline or diesel engine, and there are no explosions taking place. Because of this, Stirling engines are very quiet. The Stirling cycle uses an external heat source, which could be anything from gasoline to solar energy to the heat produced by decaying plants. No combustion takes place inside the cylinders of the engine.

The SES Solar Dish Stirling technology is well beyond the research and development stage, with more than 20 years of recorded operating history. The equipment is well characterized with over 25,000 hours of on-sun time. Since 1984, the Company's solar dish Stirling equipment has held the world's efficiency record for converting solar energy into grid-quality electricity. SES has teamed with the U.S. Department of Energy and Sun-Labs (NREL and Sandia National Laboratories) to endurance test and commercialize the SES solar Stirling system.

Friday, January 06, 2006

Stirling-Cycle Engine Powered by a Cold Fusion Reactor


This is a demonstration of a Stirling engine powered by a Cold Fusion Reactor (CFR). The Stirling-cycle engine was patented in 1816 by Robert Stirling, a Scottish minister and engineer.

The Stirling-cycle engine runs on the expansion and contraction of a gas forced between separate hot and cold chambers.

The resulting change in volume is then used to drive a piston, which can in turn be used to power external devices. A Stirling engine is very efficient.

A heat pump powered by a Stirling engine can do more work than a conventional heat pump. The Stirling engine can use any form of heat as a source of motion; it only requires a heat source and a cold sink.


A Stirling engine


Imagine a compact, quiet power plant that delivers a few kilowatts of electricity, powered by a Cold Fusion Reactor.

Let's say this power source is also virtually pollution-free, able to burn most fuels, and requires minimal maintenance.

I used a Stirling Engine model from the American Stirling Company, the model is the "Stirling Engine MM-1".

This experiment is very simple to conduct and anyone can do it.

1 - Description of the experiment :

The Cold Fusion Reactor is composed of a 700 mL glass vessel filled with 600 mL of a Potassium Carbonate (K2CO3) solution at 0.2 M.

The cathode is a pure tungsten (W) rod, 2 mm in diameter and 45 mm long, cut from tungsten welding rods. The anode is a stainless steel mesh held in place by a stainless steel shaft. All the wire connections are made with 1.5 mm² flexible copper wire sheathed in silicone. (See the photo below.)
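
If you want to prepare the same electrolyte, the amount of K2CO3 implied by those figures is easy to work out; a small sketch:

```python
# How much K2CO3 the described electrolyte implies: 600 mL of a 0.2 M solution.
MOLAR_MASS_K2CO3 = 138.2  # g/mol, from standard tables

volume_l, concentration = 0.600, 0.2   # litres, mol/L
mass_g = concentration * volume_l * MOLAR_MASS_K2CO3
print(f"Dissolve about {mass_g:.1f} g of K2CO3 and make up to 600 mL")  # ~16.6 g
```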

2 - Test results :

The CFR is preheated to 77°C and then the power supply is switched on. The Stirling engine quickly spins up to 630 RPM on the heat produced by the Cold Fusion Reactor.

Notes from Jean-Louis Naudin: This is a proof-of-concept experiment; its purpose is to demonstrate that a simple Stirling engine is able to run very well on the heat produced by the CFR.

Wednesday, January 04, 2006

Korea Boosts Submarine Project to Double Fleet


Korea has expanded a plan to build three 1,800-ton Type 214 submarines starting in 2012 by another six, in a bid to double the country's fleet by 2020, the armed forces said Wednesday.

Observers pricked up their ears at the choice of submarines over Aegis vessels as a key strategic weapon to counter any threat posed by powerful nations like China and Japan in the event of reunification with North Korea.

According to a statement from the Defense Acquisition Program Administration on Wednesday and other sources, the Navy and Joint Chiefs of Staff decided to launch three German-made 214-grade submarines by 2010 in the first phase, and start building six more subs starting in 2012, in addition to the nine Korea already has, a decision that reflects their fresh assessment of what will be needed in the mid- to long term. A source confirmed that the total number of submarines is to be upped to around 18.

On top of the 214-level subs, it has also recently become clear that the administration and Navy plan to acquire three stronger and bigger 3,000-ton submarines estimated to cost W3.7 trillion between 2010 and 2022.

Currently, the Navy has nine German-made Chanbogo class subs (Type 209) and three Dolphin class subs. When the Chanbogo class subs introduced in the 1990s are retired, they will be gradually replaced with the next-generation subs to maintain the total at 18.

The 214 class submarines with a 65 m hull and 1,800-ton displacement outstrip the Chanbogo subs, the Navy’s current core subs, in scale and performance. Thus they are capable of two weeks’ continuous operation at sea and are armed with up to 20 torpedoes, anti-ship missiles and mines. When equipped with ship-to-ground cruise missiles, they are capable of attacking strategic targets in both North Korea and neighboring countries.

Japan has 16 state-of-the-art submarines including eight 3,000-ton Oyashio class submarines, while China has a fleet of 60 including Han class offensive nuclear-powered submarines. The U.S. has as its mainstay 77 7,000-ton Los Angeles class submarines propelled by nuclear power, and North Korea has 22 1,700-ton Romeo class subs.

(englishnews@chosun.com )

Samsung SCH-i830 Finally Unveiled at CES


READ MORE: CES, Cellphones, SCH-i830, Samsung, Smartphone, Verizon

We’ve been getting leaks about the SCH-i830 for quite a while now, but Samsung and Verizon have finally let the dogs out at CES and announced its availability: January 10 for Verizon Wireless’ large account enterprise customers and January 24, 2006 for consumers. The latest of Samsung’s smartphones, the i830 has a full QWERTY keyboard, access to email and documents, quad-band frequency, VZEmail including Wireless Sync, as well as Windows Mobile and Pocket PC applications. And don’t forget Bluetooth, an SD I/O expansion slot and stereo speakers to round it out. It retails for about $600 from Verizon Wireless.

Sunday, January 01, 2006

Secret Cardiology - EECP

By Richard N. Fogoros, M.D., Your Guide to Heart Disease / Cardiology
A useful treatment for angina your cardiologist doesn't want to hear about
Updated - December 5, 2005

Recent data documenting the effectiveness of Enhanced External Counterpulsation (EECP) for the treatment of angina has failed to bring this apparently effective procedure into the mainstream of cardiology practice. In this article, DrRich discusses what EECP is, how it works, and why cardiologists are avoiding this safe, noninvasive treatment like the plague.

What is EECP?
EECP is a mechanical procedure in which long inflatable cuffs (like blood pressure cuffs) are wrapped around both of the patient’s legs. While the patient lies on a bed, the leg cuffs are inflated and deflated with each heartbeat. This is accomplished by means of a computer, which triggers off the patient’s ECG so that the cuffs deflate just as each heartbeat begins, and inflate just as each heartbeat ends.

When the cuffs inflate they do so in a sequential fashion, so that the blood in the legs is “milked” upwards, toward the heart.

EECP has two potentially beneficial actions on the heart. First, the milking action of the leg cuffs increases the blood flow to the coronary arteries. (The coronary arteries, unlike other arteries in the body, receive their blood flow after each heartbeat instead of during each heartbeat. EECP, effectively, “pumps” blood into the coronary arteries.) Second, by its deflating action just as the heart begins to beat, EECP creates something like a sudden vacuum in the arteries, which reduces the work of the heart muscle in pumping blood into the arteries. Both of these actions have long been known to reduce cardiac ischemia (the lack of oxygen to the heart muscle) in patients with coronary artery disease. Indeed, an invasive procedure that does the same thing, intra-aortic counterpulsation (IACP, in which a balloon-tipped catheter is positioned in the aorta, which then inflates and deflates in time with the heartbeat), has been in widespread use in intensive care units for decades, and its effectiveness in stabilizing extremely unstable patients is well known.

While a primitive form of external counterpulsation has also been around for a long time, it has not been very effective until recently. Thanks to new computer technology that allows the perfect timing of the inflation and deflation of the cuffs, and produces the milking action, modern EECP has been greatly enhanced.
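
As a toy illustration of that ECG-triggered timing, here is a small sketch; the R-wave times and the systole fraction are invented for illustration, not clinical parameters:

```python
# Toy sketch of the ECG-triggered timing described above: the cuffs
# deflate just as each beat (R wave) begins and inflate once systole
# ends.  The R-wave times and the systole fraction are invented for
# illustration; they are not clinical parameters.
def cuff_schedule(r_wave_times_s, systole_fraction=0.35):
    """Return (deflate_at, inflate_at) pairs, one per cardiac cycle."""
    events = []
    for start, nxt in zip(r_wave_times_s, r_wave_times_s[1:]):
        cycle = nxt - start
        deflate_at = start                              # start of systole
        inflate_at = start + systole_fraction * cycle   # start of diastole
        events.append((deflate_at, inflate_at))
    return events

r_waves = [0.00, 0.80, 1.62, 2.41]   # assumed beat times, roughly 75 bpm
for deflate, inflate in cuff_schedule(r_waves):
    print(f"deflate at {deflate:.2f} s, inflate at {inflate:.2f} s")
```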

EECP is administered as a series of outpatient treatments. Patients receive 5 one-hour sessions per week, for 7 weeks (for a total of 35 sessions). The 35 one-hour sessions are aimed at provoking long lasting beneficial changes in the circulatory system.

How effective is it?

EECP now appears to be quite effective in treating chronic stable angina. A randomized trial with EECP, published in the Journal of the American College of Cardiology in 1999, showed that EECP significantly improved both the symptoms of angina (a subjective measurement) and exercise tolerance (a more objective measurement) in patients with coronary artery disease. EECP also significantly improved “quality of life” measures, as compared to placebo therapy.

More recent data show that this improvement in symptoms following a course of EECP seems to persist for up to five years.

Furthermore, there is also preliminary data suggesting that EECP may be useful for treating unstable angina, as adjunctive therapy after revascularization (i.e., with angioplasty, stent, and/or bypass surgery), and even as first-line (instead of last resort) therapy for more routine forms of angina. (Read about EECP as early therapy for angina here.)

Finally, clinical trials have suggested that EECP may be useful in improving symptoms in patients with heart failure. Read about EECP for heart failure here.

How EECP works

Who is likely to benefit from EECP?
Based on what is already known, EECP should be considered in anybody who still has angina despite maximal medical therapy and prior revascularization. No cardiologist could argue logically against this. And, frankly, if a patient insisted on trying EECP prior to agreeing to purely elective revascularization for chronic stable angina, the cardiologist might not like it, but would be hard pressed to give anything beyond a purely emotional reason as to why this should not be tried.

Why does EECP work?

The mechanism for the sustained benefits seen with EECP still amounts to speculation. Everyone can agree that there are good reasons for EECP (just as for IACP) to benefit the heart while the therapy is actually taking place. But as to why the benefit of EECP persists even after the therapy is finished, no one can say for sure.

There is also evidence that EECP may act as a form of “passive” exercise, leading to the same sorts of persistent beneficial changes in the autonomic nervous system that are seen with real exercise.

Can EECP be harmful?

EECP can be somewhat uncomfortable (it is said to be more difficult to watch – what with the patient being noticeably jostled due to the milking action of the inflatable leg cuffs – than it is to actually have it done), but is not painful. In fact, it is apparently very well tolerated by the large majority of patients.

But not everyone can have it. People probably should not have EECP if they have certain types of valvular heart disease (especially aortic insufficiency), or if they have had a recent cardiac catheterization, an irregular heart rhythm, severe hypertension, significant blockages in the leg arteries, or a history of deep venous thrombosis (blood clots in the legs). For anyone else, however, the procedure appears to be quite safe.

Why your doctor hasn't told you about EECP?

There are preliminary data suggesting that EECP can help induce the formation of collateral vessels in the coronary artery tree, by stimulating the release of nitric oxide and other growth factors within the coronary arteries.
Then there’s the fact that EECP remains somewhat intellectually unsatisfying.

To your average cardiologist, there’s no reason at all that anyone should have thought it would work in the first place – that temporarily providing counterpulsation would have lasting effects. And the fact that it apparently does work is merely blind luck, and leaves investigators scrambling ridiculously to explain why it does. This is a less than satisfying way to advance science.

In addition, to most cardiologists, EECP is logistically difficult. To accommodate patients for EECP, they would not only have to purchase expensive equipment, but also would have to radically change the organization of their offices, their office staff, and their space.

Finally, and most importantly, EECP has nothing in common with what cardiologists do. Cardiologists study and treat the heart, for goodness sake. They stress it, image it, measure it, pace it, shock it, stent it, ablate it, revascularize it, and bathe it in drugs. What they do takes years of specialized training and expertise, millions of dollars of high-tech equipment, and tremendous manual dexterity, and it brings them significant prestige, even within the medical community.

Now they’re supposed to drop all that? In order to attach fancy balloons to peoples’ legs, throw a switch, watch them bounce around for an hour, then say, “See you tomorrow?” That’s not cardiology. That’s glorified physical therapy.

This, in DrRich’s estimation, is the real reason the average cardiologist is completely ignoring EECP, as if it doesn’t even exist. They simply can’t believe anyone really expects them to do this.

In any case, you may need to raise your cardiologist’s consciousness. If you have coronary artery disease that has proved difficult to treat, then you need to bring EECP up yourself.

Once enough patients show themselves to be aware of this new therapy and to be expecting it, suddenly EECP will no longer be beneath cardiologists, and they’ll eagerly find a way to incorporate it into their practices.

How can you receive EECP?
If you are a candidate for EECP and wish to pursue it, start with your doctor. If your doctor discourages you from pursuing EECP, make sure he/she gives you a good reason for discouraging it. Good reasons would include: you don’t have the sort of coronary artery disease or angina that would benefit from EECP; your coronary artery disease is of the type that requires revascularization; or you have one of the contraindications (listed above) for having EECP. (Good reasons would not include: it’s unproven; it doesn’t work; it’s voodoo; or I’ve never heard of it.)

There are fewer than 200 places today performing EECP, though the number is growing rapidly. If your doctor can’t think of a place to refer you for EECP, go online. The best place to start online would be EECP.com. This is a website run by Vasomedical, Inc., the company that makes the equipment for EECP, so it is not unbiased. But it does offer an excellent means of finding a place where you can get EECP in your area.

Your insurance carrier should cover EECP, though these fine humanitarians might well deny coverage initially. Medicare has approved EECP for reimbursement, and once Medicare approves a new treatment, insurance companies normally fall in line quite quickly. In the case of EECP, however, many insurance companies are still balking at paying, perhaps because their cardiology consultants are telling them it’s not really a serious therapy. Don’t let this discourage you. If you are turned down for reimbursement, appeal the decision. Most insurance companies count on patients failing to appeal (which is why they so frequently deny therapy that is obviously needed), and with Medicare supporting your contention that EECP ought to be covered, odds are that if you appeal you’ll win.