
Text D. Ecosystems: How They Work




1. The environmental problems we face and the questionable long-term viability of our current human system are caused by our failure to adhere to basic ecological principles of sustainability. These principles may show us the direction we need to take. Let's look at our human system from the point of view of each of the principles of ecosystem sustainability.

2. First Principle of Sustainability: For sustainability, ecosystems dispose of wastes and replenish nutrients by recycling all elements. In contrast to this principle, we have based our human system in large part on a one-directional flow of elements. We mine elements in one location and dispose of them in another. For example, phosphate withdrawn from soils by agricultural crops comes to us with our food supplies, but then effluents of our wastes containing the phosphate are discharged into various waterways (rivers, lakes, bays and estuaries) rather than back into the soil. To make up for the removal of phosphate from soil, phosphate rock is mined at various locations and added to soil as a constituent of fertilizer. Thus, there is basically a one-way flow of phosphate from mine to waterways. The same can be said for such metals as aluminum, mercury, lead, and cadmium, which are the "nutrients" of our industry. We have created a flow of these elements from natural deposits through our systems to dumps and landfills.

3. This one-way flow leads to two problems: depletion of the resource at one end and pollution at the other. Pollution has proved to be, by far, the more severe problem. Countless waterways around the world and even sections of the ocean are suffering severe ecological disturbances from being oversupplied with nutrients such as phosphate. This problem is known as eutrophication. Likewise, many rivers and other bodies of water are contaminated with toxic elements from various discharges. For example, thousands of kilometers of tributaries of the Amazon are badly contaminated with mercury, a waste product of gold mining. Putting such waste materials into dumps is problematic on two counts. Finding space for dumps and landfills is reaching crisis proportions in many regions. Then, even when such toxic materials are put into dumps, they tend to leak out causing pollution of both ground and surface water.

4. Aggravating the problem is the fact that we produce and use thousands of products, such as plastics, that are synthetic organic compounds that are non-biodegradable. That is, detritus feeders and decomposers are unable to attack and break them down. Thus, enormous amounts of non-biodegradable products compound the problem of finding dump sites. Also, many such synthetic products are toxic and cause pollution in the same way lead and other elements do. The rapid development of recycling programs in the last few years is an encouraging sign that we are beginning to recognize and implement the first principle of sustainability.

5. Second Principle of Sustainability: For sustainability, ecosystems use sunlight as their source of energy. In contrast to this principle, our fantastic technological and material progress of the past 200 years has been in large part a story of developing machinery, engines, and heating plants that run on fossil fuels – coal, natural gas, and crude oil. Just consider that virtually all cars, trucks, aircraft, and other vehicles run on fuels refined from crude oil; 70 percent of the electricity in our country comes from coal-fired power plants; and most homes and buildings, as well as their hot water, are heated with natural gas. Even food production, which is basically derived from solar energy (photosynthesis of crop plants), is heavily supported by fossil fuels used in farm machinery, production of fertilizer and pesticides, transportation, processing and canning, refrigeration, and finally cooking. In all, more than 10 calories of fossil fuels are consumed for every calorie of food that is served in the United States.

6. From meager beginnings in the late 1800s, oil consumption now tops 50 million barrels per day worldwide. The byproducts of burning fossil fuels enter the atmosphere and are directly responsible for our most severe air pollution problems – urban smog, acid rain, and most recently, the potential of global warming. Also, we are facing increasingly severe crises because of depletion of present oil reserves and environmental destruction in the effort to find more. Nuclear power is being promoted as an alternative, but this source also seems dubious because of the hazards of its radioactive waste products.

7. Thus, the danger in continuing to ignore the ecosystem principle regarding solar energy seems clear. In addition to being nonpolluting and nondepletable, solar energy is also extremely abundant. Green plants, including agricultural crops, utilize a very small fraction of the solar energy that hits the earth. Most of the rest is converted directly to heat as it is absorbed by water or land. In turn, this heated water and land heats the air and causes the evaporation of water. Thus, solar energy is the major driving force behind ocean currents, wind, and rain – i.e., weather. There is ample opportunity to harness some of this energy and put it to work. According to the Laws of Thermodynamics, the final heat at the end of the line is the same whether the energy is harnessed to perform useful work along the way or not. Therefore, even using solar energy on a vast scale would not change the overall dynamics of the biosphere.

8. Third Principle of Sustainability: For sustainability, the size of consumer populations is maintained such that overgrazing does not occur. In contrast to this principle, the human population has increased more than five-fold in the past 200 years. It has nearly tripled in just the last 60 years and is continuing to increase at a rate of over 90 million people per year. It can be argued that this ever-accelerating growth rate is irrelevant because humans are supported by a technological agricultural system, not a natural ecosystem. On the other hand, signs of overgrazing are becoming all too evident. First, there is literal overgrazing. Around the world millions of acres of productive grasslands have been badly degraded or even turned into desert because of overgrazing by cattle.

9. Then, there are any number of examples of overgrazing in a figurative sense. Consider the destruction of tropical and other forests; depletion of groundwater supplies; farming practices that are leading to deterioration of soil and hence of productivity; poor people in a number of less developed countries picking hillsides bare in their quest for firewood, which is their only fuel; depletion of fishing grounds, and so on. Perhaps the most serious form of figurative overgrazing, however, may be the ever-expanding human development and exploitation that displace and degrade natural ecosystems and consequently cause the extinction of countless species. The effects that this extinction may have will be discussed further. The principle of maintaining a stable (nongrowing) population is a principle that cannot be ignored.

Part II. FACTS FROM THE HISTORY OF SCIENCE
AND ENGINEERING

Text A. Period I (1900-1945)

1. The decisive events of the first period were the conception of the Theory of Relativity and that of Quantum Mechanics. Rarely in the history of science have two complexes of ideas so fundamentally influenced natural science in general.

2. There are important differences between the two achievements. Relativity theory should be regarded as the crowning achievement of classical physics of the eighteenth and nineteenth centuries. The special theory of relativity brought about a unification of mechanics and electromagnetism. These two fields were inconsistent with each other when dealing with fast-moving electrically charged objects. Of course, relativity created new notions, such as the relativity of simultaneity, the famous mass-energy relation, and the idea that gravity can be described as a curvature of space. But, altogether, the theory of relativity uses the concepts of classical physics, such as position, velocity, energy, momentum, etc. Therefore it must be regarded as a conservative theory, establishing a logically coherent system within the edifice of classical physics.

3. Quantum mechanics was truly revolutionary. It is based on the recognition that the classical concepts do not fit the atomic and molecular world: a new way to deal with that world was created. Limits were set to the applicability of classical concepts by Heisenberg’s uncertainty relations. They say «down to here and no further can you apply classical concepts». This is why it would have been better to call them «Limiting Relations». It would also have been advantageous to call relativity theory «Absolute Theory», since it describes the laws of Nature independently of the systems of reference. Much philosophical abuse would have been avoided.

4. It took a quarter of a century to develop non-relativistic Quantum Mechanics. Once it was conceived, an explosive development occurred. Within a few years most atomic and molecular phenomena could be understood, at least in principle. It is appropriate to quote a slightly altered version of a statement by Churchill praising the Royal Air Force: «Never have so few done so much in so short a time».

5. A few years later, the combination of relativity and quantum mechanics yielded new unexpected results. P.A.M. Dirac conceived his relativistic wave equation which contained the electron spin and the fine structure of spectral lines as a natural consequence. The application of quantum mechanics to the electromagnetic field gave rise to Quantum Electrodynamics with quite a number of surprising consequences, some of them positive, others negative.

6. The positive ones included Dirac’s prediction of the existence of an antiparticle to the electron, the positron, which was found in 1932 by C.D. Anderson and S.H. Neddermeyer. Most surprising were the predictions of the creation of particle – antiparticle pairs by radiation or other forms of energy and the annihilation of such pairs with the emission of light or other energy carriers. Another prediction was the existence of an electric polarization of the vacuum in strong fields. All these new processes were found experimentally later on.

7. The negative ones are consequences of the infinite number of degrees of freedom in the radiation field. Infinities appeared in the coupling of an electron with its field and in the vacuum polarization when the contribution of high-frequency fields is included. These infinities cast a shadow on quantum electrodynamics until 1946 when a way out was found by the so-called renormalization method.

8. Parallel to the events in physics during Period I, chemistry, biology, and geology also developed at a rapid pace. The quantum mechanical explanation of the chemical bond gave rise to quantum chemistry that allowed a much deeper understanding of the structure and properties of molecules and of chemical reactions. Biochemistry became a growing branch of chemistry. Genetics was established as a branch of biology, recognizing the chromosomes as carriers of genes, the elements of inheritance. Proteins were identified as essential components of living systems. The knowledge of enzymes, hormones, and vitamins vastly increased during that period. Embryology began to investigate the early development of living systems: how the cellular environment regulates the genetic program. Darwin’s idea of evolution was considered in greater detail, recognizing the lack of inheritance of acquired properties. A kind of revolution was also started in geology by A. Wegener’s concept of continental drift, the forerunner of plate tectonics. W. Elsasser’s suggestion of eddy currents in the liquid-iron core of the Earth as the source of the Earth’s magnetism was published at the end of Period I, and led to the solution of a hitherto unexplained phenomenon.

9. The year 1932 was a miracle year in physics. The neutron was discovered by J. Chadwick, the positron was found by Anderson and Neddermeyer, a theory of radioactive decay was formulated by E. Fermi in analogy with quantum electrodynamics, and heavy water was discovered by H. Urey. The discovery of the neutron initiated nuclear physics; the atomic nucleus was regarded as a system of strongly interacting protons and neutrons. This interaction is a consequence of a new kind of force, the «nuclear force», besides the electromagnetic and gravity forces, and the «weak force» that Fermi introduced in his theory of radioactivity. Nuclear physics in the 1930’s was a repeat performance of atomic quantum mechanics albeit on a much higher energy level, about a million times the energies in atoms, and based on a different interaction. It led to an understanding of the principles of nuclear spectroscopy and of nuclear reactions. Artificial radioactivity, and later nuclear fission and fusion were discovered with fateful consequences of their military applications. One of the most important insights of nuclear physics in Period I was the explanation of the sources of solar and stellar energy by fusion reactions in the interior of stars.

10. What was most striking was the small number of experimental and theoretical physicists who dealt with the new developments. The yearly Copenhagen Conferences, devoted to the latest progress in quantum mechanics and relativity, were attended by not more than fifty or sixty people. There was no division into specialities. Atomic and molecular physics, nuclear physics, condensed matter, astronomy, and cosmology were discussed and followed up by all participants. In general, everybody present was interested in all subjects and their problems. Quantum mechanics was regarded as an esoteric field; practical applications were barely mentioned.

11. Most characteristic of pre-World-War II science were small groups and low costs of research, primarily funded by universities or by foundations and rarely by government sources. Foundations had a great influence on science. Some of the impressive developments of the thirties in biology can be traced to the decision of the Rockefeller Foundation under Warren Weaver to support biology more than other sciences.


Text B. Period II (1946-1970)

1. The time from 1946 to about 1970 was a most remarkable period for all sciences. The happenings of World War II had a great influence, especially on physics. Physicists became successful engineers in large military research and development enterprises – the Radiation Laboratory at MIT, the Manhattan Project, the design of the proximity fuse – to the astonishment of government officials. Scientists who previously were mainly interested in basic physics conceived and constructed the nuclear bomb under the leadership of one of the most «esoteric» personalities, J.R. Oppenheimer; E. Fermi constructed the first nuclear pile; E. Wigner was instrumental in designing the reactors that produced plutonium; J. Schwinger developed a theory of waveguides, essential for radar. More than that, some of these people proved to be excellent organizers of large-scale research and development projects, such as the aforementioned military ones, and maintained good relations with industry.

2. The progress of natural science in the three decades after the war was outstanding. Science acquired a new face. It would be impossible in the frame of this essay to list all the significant advances. We must restrict ourselves to an account of a few of the most striking ones without mentioning the names of the authors. The choice is arbitrary and influenced by my restricted knowledge. In quantum field theory: the invention of the renormalization method in order to avoid the infinities of field theory, which made it possible to extend calculations to any desired degree of accuracy. In particle physics: the recognition of the quark structure of hadrons establishing order in their excited states, the existence of unstable heavy electrons and of several types of neutrinos (two were discovered in Period II, the third in the next period), the discovery of parity violation in weak interactions, and the unification of electromagnetic and weak forces as components of one common force field. In nuclear physics: the nuclear shell model, an extensive and detailed theory of nuclear reactions, and the discovery and analysis of rotational and collective states in nuclei. In atomic physics: the Lamb shift, a tiny displacement of spectral lines which could be explained by the new quantum electrodynamics, the maser and the laser with its vast applications, optical pumping, and non-linear optics. In condensed matter physics: the development of semiconductors and transistors, the explanation of superconductivity, surface properties, and new insights into phase transitions and the study of disordered systems. In astronomy and cosmology: the Big Bang and its consequences for the first three minutes of the Universe, the galaxy clusters and the 3 K microwave background radiation as the reverberation of the Big Bang, and the discovery of quasars and pulsars.

In chemistry: the synthesis of complex organic molecules, the determination of the structure of very large molecules with physical methods such as X-ray spectroscopy and nuclear magnetic resonance, and the study of reaction mechanisms using molecular beams and lasers. In biology: the emergence of molecular biology as a fusion of genetics and biochemistry, the identification of DNA as the carrier of genetic information followed by the discovery of its double helical structure, the decipherment of the genetic code, the process of protein synthesis, the detailed structure of a cell with its cellular organelles, and the study of sensory physiology investigating the orientation of homing birds and fish. In geology: the development and refinement of plate tectonics using newly available precision instruments, and the discovery of ocean floor spreading by means of sonar and other electronic devices.

3. Many of the new results and discoveries were based upon the instrumental advances in the field of electronics and nuclear physics due to war research. One of the most important new tools decisive for all sciences was the computer. The development and improvements of this tool are perhaps the fastest that ever happened in technology.

4. Important changes in the social structure of science took place, especially in particle physics, nuclear physics, and astronomy. The rapid developments in these fields required larger and more complex accelerators, rockets and satellites in space, sophisticated detectors, and more complex computers. The government funding was ample enough to provide the means for such instruments. The size and complexity of the new facilities required large teams of scientists, engineers, and technicians, to exploit them. Teams of up to sixty members were organized, especially in particle physics. (In Period III the sizes of teams reached several hundred.) Other branches of science, such as atomic and condensed matter physics, chemistry and biology, did not need such large groups; these fields could continue their research more or less in the old-fashioned way in small groups at a table top with a few exceptions, for example, in the biomedical field, where larger teams are sometimes necessary.

5. The large teams brought about a new sociology. A team leader was needed who had the responsibility not only for intellectual leadership, but also for the organization of subgroups with specific tasks, and for financial support. A new type of personality appeared in the scientific community with character traits quite different from the scientific leaders of the past. The participation in these large teams of many young people, graduate students and post-graduates, creates certain problems. It is hard for them to get recognition for their work, since their contributions get lost in the overall effort of the team. In order to attract young researchers to join big teams, the subgroups must have some independent initiative for well-defined tasks, so that the performers of these tasks can claim credit for their work.

6. The development of huge research enterprises caused a split in the character of science into «small» science and «big» science. Small science consists of all those fields that can be studied with small groups at relatively small cost, whereas big science is found in particle physics, in some parts of nuclear physics and astronomy, in space exploration, and in plasma physics. There is also big science in condensed matter physics and in biology: the use of synchrotron radiation in the former and the human genome project in the latter. Big science needs large financial support, so that the question of justification plays a decisive role.

Text C. Period III (from 1970 to the end of the 20th century)

1. Basic and applied science are interwoven; they are like a tree whose roots correspond to basic science. If the roots are cut, the tree will degenerate.

2. Another intellectual value is the role that basic science plays in the education of young scientists. It fosters a kind of attitude that will be most productive in whatever work the students will finally end up with. Experience has shown that training in basic science often produces the best candidates for applied work. Basic science also has ethical values. It fosters a critical spirit, a readiness to admit «I was wrong», an anti-dogma attitude that considers all scientific results as tentative, open for improvements or even negation by future developments. It also engenders a closer familiarity with Nature and a deeper understanding of our position and role in the world nearby and far away.

3. Much too little effort is devoted by scientists to explaining simply and impressively the beauty, depth, and significance of basic science, not only its newest achievements, but also the great insights of the past. This should be done in books, magazine articles, television programmes, and in school education. The view that science is materialistic and destroys ethical value systems, such as religion, should be counteracted. On the contrary, the ethical values of science should be emphasized. Finally, it would help to point out the positive achievements of applied science, the contribution to a higher standard of living, and the necessity of more science to solve environmental problems.

4. It looks as if we are facing a more pragmatic era, concentrating on applied science. Perhaps the end is nearing of the era of one hundred years full of basic discoveries and insights under the impact of the Theory of Relativity and that of Quantum Mechanics. Even so, we will always need basic research based on the urge to understand more about Nature and ourselves.

Text D. Lasers

The story of the laser, a device that produces a powerful beam of very pure light able to slice through metal and pierce diamond, began when physicists were unravelling the secrets of the atom.

In 1913 the Danish physicist Niels Bohr pointed out that atoms can exist in a series of states, and each state has a certain energy level. Atoms cannot exist between these states but must jump from one to another. An atom at a low-energy level can absorb energy to reach a high-energy level. When it changes from a high to a low-energy level, it gives out the surplus energy in the form of radiation. If the radiation is given out in the form of visible light, the light will all be of the same wavelength (that is, the same colour). The atom at a high-energy level may emit this radiation spontaneously. Or, as the German-born physicist Albert Einstein pointed out in 1917, it may be triggered into doing so by other radiation. It is on this latter process, called the stimulated emission of radiation, that the laser depends.
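Bohr's picture fixes the colour of the emitted light: the photon carries exactly the energy gap between the two levels, E = hc/λ. The following Python sketch illustrates this relation; the 1.79 eV gap is an illustrative value, chosen because it corresponds to the deep-red light of a ruby laser, and is not a figure taken from the text.

```python
# Photon wavelength for a jump between two atomic energy levels:
# E_photon = E_high - E_low = h * c / wavelength  (Bohr / Einstein).
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # one electron-volt, in joules

def emission_wavelength_nm(gap_ev):
    """Wavelength (in nm) of the light emitted for a level gap given in eV."""
    return H * C / (gap_ev * EV) * 1e9

# A gap of about 1.79 eV gives deep-red light near 693 nm,
# close to the 694 nm line of a ruby laser.
print(round(emission_wavelength_nm(1.79)))  # 693
```

Larger gaps give shorter wavelengths: a 2 eV jump lands near 620 nm (orange-red), while a 3 eV jump would emit violet light near 414 nm.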

Stimulated emission was not thought useful until the early 1950s, when the physicists C.H. Townes in the United States and N.G. Basov and A.M. Prokhorov in Russia suggested how it could be used to amplify microwaves – electromagnetic radiation with wavelengths outside the visible spectrum, much shorter than those of ordinary radio waves – and used, for example, in radar.

In 1953 Townes built the first device to amplify microwaves using stimulated emission. He used ammonia gas as the source of high-energy (or «excited») atoms. Later it was found that a ruby crystal could be used as well. The device became known as the maser, from the initials of «Microwave Amplification by Stimulated Emission of Radiation». For their pioneering work on masers Townes, Basov and Prokhorov were jointly awarded the 1964 Nobel Prize for physics.

In 1958 Townes and his brother-in-law, Arthur Schawlow, outlined a design for an optical maser – that is, one producing visible light rather than microwaves. This idea gave birth to the laser – «Light Amplification by Stimulated Emission of Radiation».

Two years later the American physicist T.H. Maiman built the first laser, using a cylindrical rod of artificial ruby whose ends had been cut and polished to be exactly flat and parallel. It produced brief, penetrating pulses of pure red light with 10 million times the intensity of sunlight. The pulsed ruby laser is still the most powerful type of laser.

The emergent laser beam differs from an ordinary light beam in several respects. Whereas ordinary light is made up of several wavelengths (colours), laser light consists of a single wavelength. And whereas ordinary light spreads out from its source in all directions, a laser beam is almost perfectly parallel.

The ruby laser was followed, also in 1960, by a gas laser, developed by D.R. Herriott, A. Javan and W.R. Bennett at Bell Telephone Laboratories in the United States. Gas lasers are not as powerful as ruby lasers but emit a continuous beam that can be left on like a torch, in contrast to the ruby laser, which emits its light in very short pulses.

The purity of wavelength and straight-line beam of lasers have many applications. In industry the heat of the beam is used for cutting, boring and welding. In tunnelling, lasers guide the boring machines on a perfectly straight line; the laser beam remains accurately focused over long distances. Even after travelling a quarter of a million miles from the earth to the moon, a laser beam would have spread only a few miles.
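A spread of only a few miles over the quarter-million miles to the moon corresponds to a beam divergence of roughly ten microradians. The small-angle estimate can be sketched in Python; the divergence value here is an assumption chosen to reproduce the text's figure, not a measured one.

```python
def beam_spread_miles(distance_miles, divergence_rad):
    # Small-angle approximation: the width the beam gains is roughly
    # distance * full divergence angle (angle in radians).
    return distance_miles * divergence_rad

# An assumed divergence of 10 microradians over the ~250,000-mile
# earth-moon distance spreads the beam by only about 2.5 miles.
print(beam_spread_miles(250_000, 1e-5))
```

By comparison, an ordinary torch beam diverging by a few degrees (tens of milliradians) would be thousands of miles wide over the same distance, which is why only a laser is practical for such work.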

Using the laser in a way similar to radar – sending out a light pulse and timing when its reflection («echo») returns – provides a very accurate method of distance measurement in space as well as on earth. By this means the distance to the moon at any time can be calculated to the nearest foot. Lasers are used in telecommunications by FIBRE OPTICS, and create three-dimensional photographic images in HOLOGRAPHY.
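The echo-timing method reduces to one formula: the light travels out and back, so the distance is d = c·t/2. A minimal sketch of the calculation; the 2.56-second round-trip time is the well-known approximate value for lunar laser ranging, used here for illustration.

```python
C = 299_792_458  # speed of light in vacuum, m/s

def echo_distance_km(round_trip_seconds):
    """Distance to a target from a timed laser echo: d = c * t / 2."""
    return C * round_trip_seconds / 2 / 1000

# A lunar echo returning after about 2.56 s puts the moon
# roughly 384,000 km (about 238,000 miles) away.
print(round(echo_distance_km(2.56)))  # 383734
```

The same formula is used over short ranges on earth; there, nanosecond-level timing gives distances to within centimetres, which is how foot-level lunar accuracy is achieved.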

In medicine, lasers are used in eye surgery to weld back in place a detached retina – the light-sensitive screen at the rear of the eye-ball. The heat of a ruby laser pulse causes a «burn» which, in healing, develops scar tissue that mends the tear. Lasers can be used to treat glaucoma, a condition in which pressure builds up in the eye-ball. The laser punches a tiny hole in the iris to relieve the pressure, the patient feeling no more than a pinprick. Laser scalpels are also coming into use. They make a fine incision and at the same time cauterise (heat seal) the blood vessels, reducing bleeding.

Lasers are applied in art as well: a famous example is J.M. Jarre’s concert with laser effects near the Egyptian pyramids at the beginning of the 3rd millennium.

Text E. Holography

A holographic image is a three-dimensional photograph of an object; but unlike a photograph made by a camera, it is seen as a ghostly image in space behind or in front of a photographic plate. On the plate is a hologram – a pattern of light and dark areas formed by beams of laser light. When pure light such as that from a laser is shone through the developed plate, the observer sees an exact three-dimensional image of the object beyond the plate. As the observer moves round the image, it changes its aspect as the object would have done. Using a curved plate, the top and bottom of an object can also be seen. In a development of holographic technique, it is possible to create an image that appears between the observer and the plate.

Holography became practical after the laser, a source of sufficiently pure light, was invented in 1960. It was developed in 1963 by two University of Michigan scientists, Emmett Leith and Juris Upatnieks. Holography is used in industrial research to make three-dimensional pictures of rapidly moving objects such as turbine blades.
