
Saturday, 6 October 2012

The Symmetry Incarnations - Part I

Symmetry in nature is a sign of an unperturbed process. It means nothing has interfered with that process, and that its effects at each step are simply scaled-up or scaled-down versions of each other. For this reason, symmetry is aesthetically pleasing, and often beautiful. Consider, for instance, faces. Symmetry of facial features about the central vertical axis is often read as beauty - not just by humans but also by monkeys.

However, this is just one of the many forms in which symmetry manifests. When it involves geometric features, it's a case of geometric symmetry. When a process occurs similarly both forward and backward in time, it is temporal symmetry. If two entities that don't seem geometrically congruent at first sight rotate, move or scale with similar effects on their forms, it is transformational symmetry. A similar definition applies to theoretical models, musical progressions, knowledge, and many other fields besides.

Symmetry-breaking

One of the first (postulated) instances of symmetry-breaking is said to have occurred during the Big Bang, when the observable universe was born. A sea of particles was perturbed 13.75 billion years ago by a high-temperature event, setting up ripples in the system and eventually breaking the particles' uniform distribution in such a way that some acquired mass, some charge, some spin, some all of them, and some none of them. In physics, this event is called spontaneous, or electroweak, symmetry-breaking. Because of the asymmetric properties of the resultant particles, matter as we know it was conceived.



Many invented scientific systems exhibit symmetry in that they allow for the conception of symmetry in the things they make possible. A good example is mathematics - yes, mathematics! On the real-number line, 0 marks the median. On either side of 0, 1 and -1 are equidistant from 0, and 5,000 and -5,000 are equidistant from 0; possibly, ∞ and -∞ are equidistant from 0, too. Numerically speaking, 1 marks the same amount of something on one side of 0 that -1 marks on the other. Not just that: many functions built on this system also behave symmetrically on either side of 0.
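
To take one small worked example (the function here is mine, chosen only for illustration): a function is called even when f(-x) = f(x) for every x, and such functions treat the two halves of the number line identically.

f(x) = x²  →  f(-5,000) = f(5,000) = 25,000,000

The curve such a function traces is a mirror image of itself about the vertical axis through 0.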



To many people, symmetry evokes an image of an object that, when cut in half along a specific axis, results in two objects that are mirror-images of each other. Cover one side of your face and place the other side against a mirror, and what you hope to see is the other, covered side of the face - despite it being only a reflection of the visible half (interestingly, this technique was used by the neuroscientist V.S. Ramachandran to "cure" the pain of amputees when they tried to move a limb that wasn't there). In this sense, there are symmetric tables, chairs, bottles, houses, trees (although uncommonly), basic geometric shapes, etc.

[caption id="attachment_24154" align="aligncenter" width="480"] A demonstration of V.S. Ramachandran's mirror-technique[/caption]

Natural symmetry

Symmetry at its best, however, is observed in nature. Consider germination: when a seed grows into a small plant and then into a tree, the seed doesn't experiment with designs. The plant is not designed differently from the small tree, and the small tree is not designed differently from the big tree. If a leaf is to sprout from the mineral-richest node on the stem, it will; if a branch is to sprout from the mineral-richest node on the trunk, it will. So, is mineral deposition in the arbor symmetric? It should be if the transport of minerals out of the soil and into the tree is radially symmetric. And so forth...



At times, repeated gusts of wind may push the tree to lean one way or another, shielding the leaves from the force and keeping them from shedding. The symmetry is then broken, but no matter. The sprouting of branches from branches, and of branches from those branches, and of leaves from those branches, all follow the same pattern. This tendency to display an internal symmetry is characterized as fractalization. A well-known example of a fractal geometry is the Mandelbrot set, shown below.



If you want to interact with a Mandelbrot set, check out this magnificent visualization by Paul Neave. You can keep zooming in, but at each step, you'll only see more and more Mandelbrot sets. Unfortunately, this set is one of a few exceptional sets that are geometric fractals.

Meta-geometry & Mulliken symbols

Now, it seems like geometric symmetry is the most ubiquitous and accessible example to us. Let's take it one step further and look at the "meta-geometry" at play when one symmetrical shape is given an extra dimension. For instance, a circle exists in two dimensions; its three-dimensional correspondent is the sphere. Through such an up-scaling, we're ensuring that all the properties of a circle in two dimensions stay intact in three dimensions, and then we're observing what the three-dimensional shape is.

A circle, thus, becomes a sphere; a square becomes a cube; a triangle becomes a tetrahedron (For those interested in higher-order geometry, the tesseract, or hypercube, may be of special interest!). In each case, the 3D shape is said to have been generated by a 2D shape, and each 2D shape is said to be the degenerate of the 3D shape. Further, such a relationship holds between corresponding shapes across many dimensions, with doubly and triply degenerate surfaces also having been defined.

[caption id="attachment_24152" align="aligncenter" width="190"] The tesseract (a.k.a. hypercube)[/caption]

There are different kinds of degeneracy, 10 of which the physicist Robert S. Mulliken identified and laid out as symbols. These symbols are important because each one defines a degree of freedom that nature possesses while creating entities, symmetrical entities included. In other words, if a natural phenomenon is symmetric in n dimensions, then the only way it can also be symmetric in n+1 dimensions is by transforming through one or more of the degrees of freedom defined by Mulliken.

[caption id="attachment_24151" align="aligncenter" width="162"] Robert S. Mulliken (1896-1986)[/caption]

Apart from regulating the perpetuation of symmetry across dimensions, the Mulliken symbols also hint at nature wanting to keep things simple and straightforward. The symbols don't behave differently for processes moving in different directions, through different dimensions, in different time-periods or in the presence of other objects, etc. The preservation of symmetry by nature is not a coincidental design; rather, it's very well-defined.

Anastomosis

Now, if that's the case - if symmetry is held desirable by nature, if it is not a haphazard occurrence but one that is well orchestrated when given a chance to be - why don't we see symmetry everywhere? Why is natural symmetry broken? Is all of the asymmetry we see today the consequence of that electroweak symmetry-breaking event? It can't be, because natural symmetry is still prevalent. Is it then implied that what symmetry we observe today exists in the "loopholes" of that symmetry-breaking? Or is it all part of the natural order of things, a restoration of past capabilities?

[caption id="attachment_24149" align="aligncenter" width="640"] One of the earliest symptoms of symmetry-breaking was the appearance of the Higgs mechanism, which gave mass to some particles but not some others. The hunt for it's residual particle, the Higgs boson, was spearheaded by the Large Hadron Collider (LHC) at CERN.[/caption]

The last point - of natural order - is allegorical with, as well as exemplified by, a geological process called anastomosis. This property, commonly of quartz crystals in metamorphic regions of Earth's crust, allows mineral veins to form that lead to shearing stresses between layers of rock, resulting in fracturing and faulting. Philosophically speaking, geological anastomosis allows for the displacement of materials from one location and their deposition in another, thereby offsetting large-scale symmetry in favor of the prosperity of microstructures.

Anastomosis, in a general context, is defined as the splitting of a stream of anything, only for the branches to rejoin some time later. It sounds simple, but it is an exceedingly versatile phenomenon, if only because it happens in a variety of environments and for an equally large variety of purposes. For example, consider Gilbreath's conjecture. It states that every sequence obtained by repeatedly applying the forward difference operator (taking absolute values) to the prime numbers starts with 1. To illustrate:

2 3 5 7 11 13 17 19 23 29 ... (prime numbers)

Applying the operator once: 1 2 2 4 2 4 2 4 6 ... (successive differences between numbers)
Applying the operator twice: 1 0 2 2 2 2 2 2 ...
Applying the operator thrice: 1 2 0 0 0 0 0 ...
Applying the operator for the fourth time: 1 2 0 0 0 0 0 ...

And so forth.
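
A minimal Python sketch of this construction (the primes are hard-coded for brevity; extend the list to probe the conjecture further):

# Iterated absolute forward differences of the primes (Gilbreath's conjecture).
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def forward_differences(seq):
    # absolute differences between successive terms
    return [abs(b - a) for a, b in zip(seq, seq[1:])]

row = primes
for _ in range(4):
    row = forward_differences(row)
    print(row)  # each row begins with 1, as the conjecture predicts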

If each line of numbers were to be plotted on a graph, moving upwards each time the operator is applied, then a pattern for the zeros emerges, shown below.


This pattern is called that of the stunted trees, as if it were a forest populated by growing trees with clearings that are always well-bounded triangles. The numbers from one sequence to the next are anastomosing, only to come close together after every five lines! Another example is the vein skeleton on a hydrangea leaf. Both the stunted-trees and the hydrangea-vein patterns can be simulated using rule 90, a simple cellular automaton that uses the exclusive-or (XOR) function.
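
A minimal sketch of rule 90 itself (my own illustration, not tied to the hydrangea data): each cell's next state is the XOR of its two neighbours, and starting from a single live cell it grows the same triangle-pocked pattern.

# Rule-90 cellular automaton: next cell = left neighbour XOR right neighbour.
width, steps = 63, 32
row = [0] * width
row[width // 2] = 1  # a single live cell in the middle

for _ in range(steps):
    print(''.join('#' if cell else ' ' for cell in row))
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]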


Nambu-Goldstone bosons

Now, what does this have to do with symmetry, you ask? While anastomosis may not have a direct relation with symmetry and only a tenuous one with fractals, its presence indicates a source of perturbation in the system. Why else would the streamlined flow of something split off and then have the tributaries unify, unless possibly to reach out to richer lands? Either way, anastomosis is a sign of the system acquiring a new degree of freedom. By splitting a stream with x degrees of freedom into two new streams each with x degrees of freedom, there are now more avenues through which change can occur.

[caption id="attachment_24138" align="aligncenter" width="365"] Water entrainment in an estuary is an example of a natural asymptote or, in other words, a system's "yearning" for symmetry[/caption]

Particle physics simplifies this scenario by assigning a particle to every force and amount of energy. Thus, a force is said to be acting when a force-carrying particle is being exchanged between two bodies. Since each degree of freedom also implies a new force acting on the system, it wins itself a particle - actually, a class of particles called the Nambu-Goldstone (NG) bosons. Named for Yoichiro Nambu and Jeffrey Goldstone, who hypothesized their existence, the presence of n NG bosons in a system means that, broadly speaking, the system has n degrees of freedom.

[caption id="attachment_24137" align="aligncenter" width="275"] Jeffrey Goldstone (L) & Yoichiro Nambu[/caption]

How and when an NG boson is introduced into a system is not yet a well-understood phenomenon theoretically, let alone experimentally! In fact, it was only recently that Haruki Watanabe, a theoretical physicist at UC Berkeley, developed a mathematical model capable of predicting how many degrees of freedom a complex system could have given the presence of a certain number of NG bosons. However, at the most basic level, it is understood that when a symmetry breaks, an NG boson is born!

The asymmetry of symmetry

In other words, when asymmetry is introduced in a system, so is a degree of freedom. This seems only intuitive. At the same time, you'd think the converse is also true: that when an asymmetric system is made symmetric, it loses a degree of freedom - but is this always true? I don't think so, because then it would violate the third law of thermodynamics (specifically, the Lewis-Randall statement of it). Therefore, there is an inherent irreversibility, an asymmetry of the system itself: it works fine one way, it doesn't work fine the other - just like the split-off streams, but this time unable to reunify properly. Of course, there is the possibility of partial unification: in the case of the hydrangea leaf, symmetry is not restored upon anastomosis, but there is, evidently, an asymptotic attempt.

[caption id="attachment_24136" align="aligncenter" width="373"] Each piece of a broken mirror-glass reflects an object entirely, shedding all pretensions of continuity. The most intriguing mathematical analogue of this phenomenon is the Banach-Tarski paradox, which, simply put, takes symmetry to another level.[/caption]

However, it is possible that in some special frames, such as in outer space, where the influence of gravitational forces is weak if not entirely absent, the restoration of symmetry may be complete. Even though the third law of thermodynamics still applies here, it comes into effect only with the transfer of energy into or out of the system. In the absence of gravity (and, thus, friction) and of other retarding factors, such as the distribution of minerals in the soil for acquisition, symmetry may be broken and reestablished without any transfer of energy.



The simplest example of this is a water droplet floating around. If a small globule of water breaks away from a bigger one, the bigger one becomes spherical quickly; when the seditious droplet joins another globule, that globule also reestablishes its spherical shape. Thermodynamically speaking, there is mass transfer, but at (almost) 100% efficiency, resulting in no additional degrees of freedom. The force at play that establishes sphericality is surface tension, through which a water body seeks to occupy the shape with the lowest surface area for its volume (notice how that shape is incidentally also the one with the most axes of symmetry - or, put another way, no redundant degrees of freedom? Creating such spheres is hard!).
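
To put a number on that, here is a standard back-of-the-envelope comparison (my arithmetic, not the author's): for a fixed volume V, a sphere of radius r = (3V/4π)^(1/3) has surface area

A(sphere) = 4πr² = (36π)^(1/3) · V^(2/3) ≈ 4.84 · V^(2/3)
A(cube) = 6 · V^(2/3)

so a droplet that relaxes into a sphere exposes roughly a fifth less surface than a cube holding the same water - exactly the direction in which surface tension pushes.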

A godless, omnipotent impetus

Perhaps the explanation of the roles symmetry assumes seems regressive: every consequence of it is no consequence but itself all over again (self-symmetry - there, it happened again). This only seems like a natural consequence of anything that is... well, naturally conceived. Why would nature deviate from itself? Nature, it seems, isn't a deity in that it doesn't create. It only recreates itself with different resources, lending itself and its characteristics to different forms.



A mountain will be a mountain to its smallest constituents, and an electron will be an electron no matter how many of them you bring together at a location. But put together mountains and you have ranges, sub-surface tectonic consequences, a reshaping of volcanic activity because of changes in the crust's thickness, and a long-lasting alteration of wind and irrigation patterns. Bring together an unusually large number of electrons to make up a high-density charge, and you have a high-temperature, high-voltage volume from which violent, permeating discharges of particles can occur - i.e., lightning. Why should stars, music, light, radioactivity, politics, manufacturing or knowledge be any different?

With this concludes the introduction to symmetry. Yes, there is more, much more...

[caption id="attachment_24131" align="aligncenter" width="560"] xkcd #849[/caption]

Saturday, 4 August 2012

Graphene the Ubiquitous

Every once in a while, a (revolutionary-in-hindsight) scientific discovery is made that is at first treated as an anomaly, and then verified. Once established as a credible find, it goes through a period where it is subject to great curiosity and intriguing reality checks - whether it was a one-time thing, whether it can actually be reproduced under different circumstances at different locations, and whether it has properties that can be tracked through different electrical, mechanical and chemical conditions.

After surviving such tests, the once-discovery then enters a period of dormancy: while researchers look for ways to apply their find's properties to solve real-world problems, science must go on and it does. What starts as a gentle trickle of academic papers soon cascades into a shower, and suddenly, one finds an explosion of interest on the subject against a background of "old" research. Everybody starts to recognize the find's importance and realize its impending ubiquity - inside laboratories as well as outside. Eventually, this accumulating interest and the growing conviction of the possibility of a better, "enhanced" world of engineering drives investment, first private, then public, then more private again.

Enter graphene. Personally, I am very excited by graphene because of its extremely simple structure: it's a planar arrangement of carbon atoms, a single atom thick, positioned in a honeycomb lattice. That's it; yet the capabilities it has stacked up in the eyes of engineers and physicists worldwide since 2004, the year of its experimental discovery, are mind-blowing. In the fields of electronics, mensuration, superconductivity, biochemistry, and condensed-matter physics, the attention it currently draws is at a historic high.

Graphene's star power, so to speak, lies in its electronic and crystalline quality. More than 70 years ago, the physicist Lev Landau argued that lower-dimensional crystal lattices, such as that of graphene, are thermodynamically unstable: at some fixed temperature, the distances through which the energetic atoms vibrate would cross the interatomic distance, and the lattice would break down into islands, a process called "dissolving". Graphene defies this argument: its extremely small interatomic distances translate into improved electron-sharing and strong covalent bonds that don't break even at elevated temperatures.

As Andre Geim and Konstantin Novoselov, experimental discoverers of graphene and joint winners of the 2010 Nobel Prize in physics, wrote in 2007:
The relativistic-like description of electron waves on honeycomb lattices has been known theoretically for many years, never failing to attract attention, and the experimental discovery of graphene now provides a way to probe quantum electrodynamics (QED) phenomena by measuring graphene’s electronic properties.

(On a tabletop for cryin' out loud.)

What's more, because of its tendency to localize electrons faster than conventional devices can, using lasers to activate the photoelectric effect in graphene resulted in electric currents (i.e., moving electrons) forming within picoseconds: photons in the laser pulse knocked out electrons, which then traveled to the nearest location in the lattice where they could settle down, leaving "holes" in their wake that pulled in the next electrons, and so forth. Because of this alone, graphene could make for an excellent photodetector, capable of picking up on small "amounts" of EM radiation quickly.



An enhanced current-generation rate could also be read as a better electron-transfer rate, with big implications for artificial photosynthesis. The conversion of carbon dioxide to formic acid requires a catalyst that operates in the visible range to provide electrons to an enzyme it is coupled with. The enzyme then reacts with the carbon dioxide to yield the acid. Graphene, a team of South Korean scientists observed in early July, played the role of that catalyst with higher efficiency than its peers in the visible range of the EM spectrum, as well as offering a higher surface area over which electron transfer could occur.

Another potential area of application is in the design and development of non-volatile magnetic memories for more efficient computers. A computer usually has two kinds of memory: a faster, volatile memory that can store data only when connected to a power source, and a non-volatile memory that stores data even when power to it is switched off. A lot of the power consumed by computers is spent transferring data between these two memories during operation, which opens an undesirable gap between a computer's optimum efficiency and its operational efficiency. To address this, a Singaporean team of scientists hit upon the use of two electrically conducting films separated by an insulating layer, developing a magnetic resistance between them on application of a spin-polarized electric field.

The resistance is highest when the direction of the magnetic field in the two films is anti-parallel (i.e., pointing in opposite directions), and lowest when it is parallel. This sandwich arrangement is subsequently divided into cells, each possessing some magnetic resistance in which data is stored. For maximal data storage, the fields would have to be anti-parallel and the films' spin-polarizability high. Here again, graphene was found to be a suitable material. In fact, in much the same vein, this wonder of an allotrope could also have some role to play in replacing existing tunnel-junction materials such as aluminium oxide and magnesium oxide, because of its lower electrical resistance per unit area, absence of surface defects, prohibition of interdiffusion at interfaces, and uniform thickness.
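
For a sense of how such a memory cell is characterised, the figure of merit usually quoted is the tunnel magnetoresistance (TMR) ratio; this is the standard definition, not a formula taken from the work described above:

TMR = (R_antiparallel - R_parallel) / R_parallel

The larger this ratio, the more cleanly the two magnetic states - and hence the stored 0 and 1 - can be told apart.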

In essence, graphene doesn't only replace existing materials to enhance a product's (or process's) mechanical and electrical properties, but also brings along an opportunity to redefine what the product can do and what it could evolve into in the future. In this regard, it far surpasses existing results of research in materials engineering: instead of forging swords, scientists working with graphene can now forge the battle itself. This isn't surprising at all considering graphene's properties are most effective for nano-electromechanical applications (there has been talk of a graphene-based room-temperature superconductor). More precise measurements of its properties should open up a trove of new fields, and possible hiding locations of similar materials, altogether.

Sunday, 27 November 2011

Just because they’re less dangerous than nuclear power doesn’t mean they aren’t dangerous at all.

Alternate Sources of Energy (ASE) are any sources of energy that replace existing fuel sources without the same undesirable consequences. They are intended to replace fossil fuels, other high emitters of carbon dioxide and nuclear energy. The primary purpose of an ASE is to provide clean energy at a higher efficiency than that of conventional energy sources. While they are frequently touted to be the future, many of their demerits lack ample media representation or are ignored simply because they’re less dangerous than radioactive waste from nuclear power plants.

However, with their increasing media presence, journalists need to be aware of the right questions to ask as well as some of the problems that are specific to ASEs. Here are the properties and disadvantages of five alternate sources of energy.

  1. Solar energy

  2. Wind energy

  3. Geothermal energy

  4. Biofuels

  5. Hydrogen


COMMON DEMERITS

Most ASEs have some common disadvantages that come with the fact of being “alternate”:

  1. Cost – For the same amount of money, the amount of energy delivered is lower. If the investment in the energy sector can’t be increased, then growth rates will have to be brought down.

  2. Dependence on international supplies – Many ASEs require raw materials that are situated outside the region of need. This dependence is also influenced by local factors, explained in the next point.

  3. Influence of local factors – In order to sustain ASEs, local industries will have to absorb the demands placed on them for research and technology. Therefore, which ASE is consumed in a region will depend on what resources that region already has.


SOLAR ENERGY

Solar energy is harvested using solar cells

Each cell is a thin wafer of monocrystalline silicon implanted with electrodes. When photons in sunlight strike the silicon, they knock electrons loose – this is the photovoltaic effect. The electrons are then captured by the electrodes, driving a small electric current.

A single solar cell would take 24 years or more to produce enough electrical energy to power one household. Instead, huge solar farms have to be built, consisting of arrays of modules of cells, to provide the hundreds of megawatts that nations need today.

DEMERITS

  1. Solar cells work at an efficiency of 14 per cent at a temperature of 25 degrees Celsius

  2. This ideal temperature will not be available at all points of time, which means the efficiency is going to be less than 14 per cent most of the time

  3. Chennai receives about 6 kWh/m2/day of solar radiation – which means a solar panel measuring 1 sq. metre will produce about 0.84 kWh in one day (the arithmetic is sketched after this list)

  4. The cost of energy production is $3.4/watt.

  5. For Tamil Nadu, which currently faces a 659-megawatt shortage, the cost of production will be $2.24 billion.

  6. Solar cells produce only direct current, or DC, which is not directly usable. It has to be converted to AC, or alternating current, first, which adds further to the cost.

  7. To produce energy at a higher efficiency, compound semiconductors will have to be used instead of monocrystalline silicon. Some examples, with their costs, are:

  8. Cadmium telluride - $1.76-2.48/watt, $550/kg

  9. Copper indium gallium selenide - $1.15/watt, ~$25,000/kg

  10. Gallium arsenide – $0.86-1/watt, $1,640/kg

  11. Of these, cadmium, tellurium, gallium, selenium and arsenic are all highly toxic, with known teratogenic effects (teratogenic means capable of causing birth defects)

  12. All energy sources become feasible only when they can provide a continuous supply of energy. For solar farms to do this, suitable storage systems will have to be provided. Further, solar cells are useless during rainy seasons and whenever sufficient sunlight is not available to provide any useful amount of energy.
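
To check the arithmetic in points 3 and 5, here is a quick sketch; the efficiency, insolation and cost figures are simply the ones quoted above, not independently verified values:

# Solar arithmetic from the list above (figures as quoted).
efficiency = 0.14        # 14 per cent cell efficiency at 25 deg C
insolation = 6.0         # kWh per sq. metre per day in Chennai
print(round(efficiency * insolation, 2))            # 0.84 kWh per sq. metre per day

cost_per_watt = 3.4      # dollars per watt
shortage_watts = 659e6   # Tamil Nadu's 659-megawatt shortage
print(round(cost_per_watt * shortage_watts / 1e9, 2))  # 2.24, i.e. $2.24 billion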


WIND ENERGY

Wind energy is harvested using windmills

A windmill is an ensemble of a steel tower, the blades and the wind turbine: when the blades are rotated by the kinetic energy of the wind, a turbine converts the kinetic energy of the moving blades into electrical energy.

DEMERITS

  1. Wind stations are often built to generate different amounts of power depending on the wind speeds where they’re located. Consequently, the sub-station grids that store the power temporarily must be equipped to support different amounts of power on the same transmission line. For this, they require something called a capacitor farm – which is extremely expensive to set up. This is only a minor demerit, but it is something you won’t find politicians talking about.

  2. Wind stations work at an efficiency of 20 to 40 per cent, depending on wind speeds, and given that each watt costs $2.12, one megawatt of output will require $8.5 million to $10.2 million worth of input (one reading of this arithmetic is sketched after this list). Even though a little over two dollars per watt sounds low, the figure has increased by 9 per cent over last year despite an increase in the demand for wind turbines.

  3. Wind farms produce a large volume of infrasonic sound which interferes with the human sensory system, causing nausea, severe headaches, temporary deafness, hallucinations and temporary blindness amongst all age groups of people.

  4. Current research indicates that higher wind speed doesn’t always mean higher power production. In fact, the chart below shows that energy generation is at its highest when the wind speed is significantly lower. As a journalist writing a story on wind power, be careful not to be swayed by “high wind speeds” – first compare the rated energy output of the turbine, and then look for the corresponding wind speed.


[caption id="attachment_20821" align="aligncenter" width="504" caption="Image from Wikipedia"][/caption]

GEOTHERMAL ENERGY

Geothermal energy is the heat energy of the earth.

There is a continuous loss of heat from the earth’s core into the mantle and the crust. There are also radioactive materials that contribute to the heating. The energy is stored in compacted rocks, underground water bodies, and subterranean air currents. In order to retrieve the energy, cold water is pumped down toward the hot bedrock and pumped back up again as steam, which is used to power a turbine.

The Philippines, Iceland and El Salvador each produce between 25 and 30 per cent of their electricity from geothermal power plants.

DEMERITS

  1. In order to find sources of geothermal energy, drilling and mining have to be deployed on a large scale, apart from scouting for underground heat sources with remote-sensing probes such as satellites.

  2. Drilling costs are significantly high at $2.2 per watt

  3. There are high failure rates associated with drilling because 80 per cent of all geothermal energy is due to radioactive decay, which is hard to detect or determine from space. The lowest failure rate in the world is in Nevada, USA: 1 in 5 drills will find nothing of value underground.

  4. Geothermal plants that have to deliver in megawatts need to have sufficient infrastructure to support the continuous mining and pumping of water and steam. Effectively, the total averaged cost comes to $4 per watt – significantly higher than the cost of other ASEs

  5. The best geothermal sources are those near tectonic plates – any seismological activity will pose a significant risk to the plant and to those dependent on energy from the plant

  6. Underground air currents that are trapped in geothermal wells are released when the pockets are mined. These air currents are composed of 90 per cent methane, 5 per cent carbon dioxide, and other gases – these are greenhouse gases

  7. Releasing them into the atmosphere adds to global warming

  8. Removing them from under the soil destroys the soil composition and alters the ecosystem

  9. Since methane is lighter than air, the density of air above a geothermal power plant will be reduced, making the skies in that area unsafe for air travel


BIOFUELS

Biofuels are fuels containing some amount of carbon that recently originated from the carbon cycle, i.e. was derived from an organic source.

Biomass is solid biofuel and is derived from wood, sawdust, grass trimmings, domestic refuse, charcoal, agricultural waste, and dried manure. These products are compacted to increase their density and used as pellets which can be combusted.

Liquid biofuels include methanol and ethanol. Ethanol is mixed with gasoline at 1:10 to increase the octane number of the fuel. The higher the octane number, the more the fuel can be compressed before detonating, and the more energy is released per unit volume of fuel. Methanol can be used directly as engine fuel.

Biogases are produced when organic matter is broken down by bacteria in the absence of oxygen. Biogas consists mostly of methane, carbon dioxide, hydrogen sulphide and siloxanes (compounds of silicon, oxygen and carbon), and can be burnt to release about 19.7 megajoules per kilogram.
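
For scale, a rough comparison with gasoline; the ~45 MJ/kg figure for gasoline is a typical textbook value and an assumption of mine, not a number given in this post:

# How much more biogas (by mass) is needed to match the energy in 1 kg of gasoline.
biogas_mj_per_kg = 19.7    # figure quoted above
gasoline_mj_per_kg = 45.0  # approximate value for gasoline (assumption)
print(round(gasoline_mj_per_kg / biogas_mj_per_kg, 2))  # ~2.28 kg of biogas per kg of gasoline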

DEMERITS

  1. Various environmental models have been discussed that illustrate the merits of biofuels, including high oil prices, poverty-reduction potential, sustainable biofuel production, and low cost.

  2. All biofuels have a lower energy content than hydrocarbon fossil fuels – which means that to produce the same amount of energy, a larger volume of biofuel has to be used (see the rough comparison above)

  3. Methanol and ethanol are basic in nature and produce acidic contaminants upon combustion, which then corrode the valves and transmission ducts of the vehicle

  4. Methanol is hygroscopic – it absorbs moisture directly from the atmosphere – and so dilutes itself if not handled properly. This also increases the wetness of by-products of methanol combustion

  5. Even though biofuels produce no smoke when combusted, they contain more than 20 times as much greenhouse gases as fossil fuels – which means they will contribute more to global warming than the fuels they replace

  6. In order to produce larger quantities of biofuels, larger quantities of resources are necessary

  7. More water is needed

  8. More land is needed

  9. Increase in biofuel production will place some stress on agricultural output and water resources, resulting in an increase in the prices of vegetables, etc.

  10. Volatile organic compounds present in biogas, upon exposure to sunlight, react with nitrogen oxides in the atmosphere to form tropospheric ozone, peroxyacyl nitrates and nitrogen dioxide – this miasma is commonly called photochemical smog and causes emphysema, bronchitis and asthma


HYDROGEN

Hydrogen is the lightest known element and makes up approximately 75 per cent of the known Universe by mass

Hydrogen is not a source of energy, like coal or the sun, but a carrier of energy, like light and electricity

The source of hydrogen’s energy is its highly reactive nature and the way it combines explosively with oxygen to form water vapour

DEMERITS

  1. The catalysts required to produce hydrogen, platinum and zirconium, are extremely expensive – an industrial alternative is to compress water to extremely high pressures and send an electric current through it to break it down into hydrogen and oxygen – in this case, the compressor requires large amounts of energy

  2. Hydrogen costs $4 per kilogram at its purest and $1.40 per kilogram when it is derived from natural gas

  3. Once hydrogen has been obtained, it can be stored, transported and recombined at another location to yield large amounts of energy.

  4. In its natural gaseous form, every kilogram of hydrogen occupies an 89-litre tank – which is comparable to the fuel tank of a large truck

  5. Hydrogen can be compressed and liquefied, but an onboard cryogenic storage unit will consume large amounts of power.

  6. Hydrogen storage tanks have to be significantly stronger, and heavier, than normal tanks because high-pressure hydrogen has a tendency to embrittle metals and leak into the atmosphere, where it can ignite explosively on contact with air





