
Saturday, 14 January 2012

LHC: Not one entity but a train of innovations.

Particle physics has advanced by leaps and bounds to get to where it is now. Considering that particle accelerators before 1982 were restricted to either advanced linear colliders or colliders that studied particles whose lifetimes were nowhere near as short as those of the Higgs boson or the four-quark hadrons, the last 30 years have witnessed an onslaught of exciting new technology focused on accuracy, precision, speed and connectivity. Together, these four attributes are extremely important in the study of very short-lived particles. Separately, they are significant in a variety of industries, especially metrology, telecommunications, materials engineering, meteorology, cold storage and preservation, and diagnostics.

Let's break down the Large Hadron Collider (LHC) to see what parts it yields. The 27-km-long tunnel is made of concrete and is 3.8 metres wide. Two beam pipes guide the particles around in opposite directions at 0.999999991 times the speed of light (which is 299,792,458 m/s), and when they've been sufficiently accelerated, the particles can meet at one of four intersections between the pipes. To increase the chances of a head-on collision, 1,232 dipole magnets and 392 quadrupole magnets, arranged above and below the pipes, steer and focus the beams. The magnets weigh a total of 12,500 tonnes, surpassed in size only by the static ICAL detector's 50,000-tonne magnet proposed at India's INO.
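The beam speed quoted above, together with the proton's rest mass, is enough to estimate the energy of each proton via the Lorentz factor. A minimal sketch (the rest-mass value is the standard one, not from the text):

```python
import math

# Beam speed quoted above, as a fraction of the speed of light
beta = 0.999999991
proton_rest_energy_gev = 0.938272  # proton rest-mass energy, GeV

# Lorentz factor: gamma = 1 / sqrt(1 - beta^2)
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Total energy per proton, in TeV
energy_tev = gamma * proton_rest_energy_gev / 1000.0

print(f"gamma ~ {gamma:.0f}")
print(f"energy per proton ~ {energy_tev:.1f} TeV")
```

The result, roughly 7 TeV, agrees with the LHC's design beam energy.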

[caption id="attachment_21265" align="aligncenter" width="448" caption="The Compact Muon Solenoid (CMS) detector in the LHC"][/caption]

These 1,624 magnets are electromagnets, and as they are cycled on and off a little more than 22,000 times per second, they heat up. They can't be allowed to do that, however, and to cool them down, 96 tonnes of liquid helium is used to maintain their niobium-titanium coils at 1.9 K. In all, the magnets store 10 gigajoules of energy while generating a magnetic field of up to 8.3 tesla (about 332,000 times stronger than the Earth's) as 323 trillion protons gear up at near-light speed for what can only be called the bloodiest civil war.
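The "332,000 times stronger" comparison checks out if we take the Earth's surface field to be about 25 microtesla (it actually varies between roughly 25 and 65 µT depending on location):

```python
lhc_field_t = 8.3        # peak dipole field, tesla
earth_field_t = 25e-6    # Earth's surface field, ~25 microtesla (assumed value)

ratio = lhc_field_t / earth_field_t
print(f"LHC dipole field ~ {ratio:,.0f} x Earth's field")  # ~332,000
```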

This cooling business makes the LHC the largest cryogenic facility of all time, an exemplar we can work with as we move into a future in which the storage and transportation of hydrogen for fuel cells is an increasingly difficult problem - and not just because we haven't given it enough thought yet. You see, hydrogen is extremely explosive in the presence of oxygen, reacting readily to form steam and a large quantity of heat. Therefore, hydrogen has to be stored in leak-proof containers that are capable of withstanding big shocks. It also has to be stored in its liquid state, because gaseous hydrogen has such a low density that even a very small mass occupies a large volume.

[caption id="attachment_21266" align="aligncenter" width="400" caption="The cross-section of a cryopump"][/caption]

At the LHC, the liquid helium is stored in copper-jacketed cryostats that maintain its temperature at 1.9 K, and is circulated by means of special pumps called cryopumps. These pumps are kept at the very low temperatures they require to remain functional by the compressed helium itself. In the case of hydrogen, however, a smaller cryocooler can be attached to the cryopump (alongside a sorption pump - but that's not important now) to keep it cold, and the hydrogen can be made readily available for use through a regenerative evaporation process.

Also, if you didn't know, the working principle of the LHC is encapsulated by the cyclotrons that work with Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET) and Computerized Tomography (CT) scanners in medical diagnostics. Basically, these scanners detect the decay of a radioactive isotope with a very short half-life (usually 20 to 110 minutes) within the human body through a series of beta decays and electron-positron annihilation events. They have a cyclotron in close proximity that generates these isotopes to be traced, and a cyclotron works on a principle the LHC shares: take a charged particle, set it on a curved path by exposing it to a magnetic field, and nudge it faster on every pass with an alternating electric field applied between two D-shaped electrodes, until the particle has sped up enough to be released.

[caption id="attachment_21267" align="aligncenter" width="532" caption="A classical Lawrence cyclotron"][/caption]
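For scale, the resonance frequency that the cyclotron's alternating voltage must match is f = qB/2πm. For a proton in a 1.5-tesla field (an illustrative field strength, not a value from the text):

```python
import math

q = 1.602e-19   # proton charge, coulombs
m = 1.673e-27   # proton mass, kg
B = 1.5         # magnetic field, tesla (illustrative value)

# Cyclotron resonance frequency: f = qB / (2*pi*m)
f = q * B / (2 * math.pi * m)
print(f"cyclotron frequency ~ {f / 1e6:.1f} MHz")
```

The answer, a few tens of megahertz, is why cyclotron supplies are radio-frequency oscillators.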

Because the LHC's detectors have to characterize high-energy collisions precisely, i.e. determine the particles' charge, mass and other quantum properties to within 99.99 per cent accuracy, they have to have a high luminosity. Further, each detector is not just a place that receives signals and immediately interprets them. Instead, it comprises everything from the detection mechanism to the millions of read-out channels that transmit the data to supercomputing grids. This built-in capacity to work at various energies and with unpredictable scenarios gives the diagnostics and instrumentation industries a lot to work with when it comes to the fabrication of scanners.

As of now, image reconstruction by the scanners is the most difficult task in the entire process. However, the technology has improved so much that the different tracer isotopes and their biochemical reactions with tissues can be visualized even for previously sensitive sections of the body. In neuropsychiatry, for instance, a substance called a radioligand is used to label the dopamine, serotonin and opioid receptors in the brain. Then, using a suitable scanner like a PET, the levels and neurological pathways of these receptors are monitored over a period of time. Because these receptors are significantly involved in disorders like schizophrenia, Alzheimer's, substance abuse and mood disorders, advancements in selecting the right radioligands, determining where each reaction happens, and identifying various kinds of reactions and reconstructing them in 3D can go a long way towards finding suitable cures.

[caption id="attachment_21268" align="aligncenter" width="448" caption="In a PET scanner, an electron and an injected positron annihilate each other within the human body to produce two gamma rays. When these rays reach the scanner, they are recorded as a burst of light. The image of the brain from a PET scan, shown above, is reconstructed by orienting the scanner in various directions."][/caption]

The supercomputing grid mentioned earlier is an array of monstrously powerful computers assembled at the CERN, capable of processing tens of gigabytes of data per second, churning out the results, ordering them, and then looking for patterns. At the same time, there is a Europe-wide project in place that lets volunteers log in to the CERN server and donate their computers' idle time for added computing power: LHC@home. Given that the CERN supercomputers are ahead in computing capacity by orders of magnitude, I don't know whether the power the initiative contributes makes much of a difference. However, that such a concept exists is more important - especially considering we're in the era of cloud computing.

Another field of physics that requires such massive computing power is meteorology. The second-by-second reconstruction of weather patterns, cyclones, hurricanes, cloud movements, winds, rains, and ocean waves across a swath of oddly varying topographies with high unpredictability makes weather forecasting a superbitch. As of now, it is a task taken on only by national governments and prolific academic institutions. In this context, what if a computing grid existed that drew on volunteers' idle PC time to assist with the calculation and simulation of climatic patterns? This would reduce the load on existing computing infrastructure and release computation time for other calculations, making forecasting quicker. Or, interpreted another way, more timely.

Such tightly-coupled distributed programs, called clusters, go hand in hand with advancements in running synchronous algorithms on essentially asynchronous systems, and in determining event sequences and logical cause-effects ("what happened when/what's next"). Clusters also have to address a host of other issues, and understanding how the CERN deals with them every day will provide invaluable insight into setting up such systems in other industries, too. These issues include achieving overall system reliability in the presence of some faulty processes, keeping the signal-to-noise ratio (SNR) up at all points through self-stabilization to reduce the amount of error, and the Two Generals' Problem.

[caption id="attachment_21269" align="aligncenter" width="438" caption="In the Two Generals' Problem, two generals await with their armies on either sides of a valley. The people they wait to conquer are in the valley, and the invasion will be successful only if both generals attack at the same time. Since sending a messenger through the valley could result in his being "flipped" or lost, a potentially infinite number of messengers will be required to confirm a time of action."][/caption]
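A toy simulation (my own illustration, not from the post) makes the caption's point concrete: however many confirmations are relayed across the valley, the sender of the last one can never learn whether it arrived, so certainty is never reached - more messengers only shrink, never close, the gap.

```python
import random

random.seed(1)

def run_chain(n, p):
    """Relay n confirmation messages through the valley, each lost with
    probability p. Returns how many arrived before the first loss."""
    for i in range(n):
        if random.random() < p:
            return i
    return n

p, trials = 0.1, 100_000
for n in (1, 3, 10):
    complete = sum(run_chain(n, p) == n for _ in range(trials))
    print(f"{n:>2} confirmations: all delivered in {complete / trials:.1%} of runs")
```

Note that even in the runs where every message gets through, the final sender cannot distinguish that outcome from a loss - which is the crux of the problem.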

These are only some of the innovations that the CERN has pioneered. In the 1980s, nobody could have foreseen such a spurt of improvements because, then, the LHC was just a particle accelerator. As the years went by and the demands of the physics community grew, countries came together to exercise their strengths and contribute to this project an idea, a component or some other service - anything each country was at the forefront of. Moreover, the LHC also pushed regional and associated industries to toughen up, encourage research in niche areas, and think of better ways to turn any idea into reality quickly. And when the internet came up and opened the world up more than globalization had, one country's contributions proved to be another country's solutions, and the communications gap that had existed between them for all those years was bridged by the experiment.

I'm sure there's a lot more to the collider, but the purpose of this endeavour was to illustrate that the LHC is not just a physics thing. Its contribution to mankind has long surpassed the hunt for the Higgs boson: today, collider technology is everywhere, from baby diapers and cereal boxes to X-ray spectroscopy, Bose-Einstein statistics and superconductors.

Sunday, 27 November 2011

Just because they’re less dangerous than nuclear power doesn’t mean they aren’t dangerous at all.

Alternate Sources of Energy (ASE) are any sources of energy that replace existing fuel sources without the same undesirable consequences. They are intended to replace fossil fuels, other high emitters of carbon dioxide and nuclear energy. The primary purpose of an ASE is to provide clean energy at a higher efficiency than that of conventional energy sources. While they are frequently touted to be the future, many of their demerits lack ample media representation or are ignored simply because they’re less dangerous than radioactive waste from nuclear power plants.

However, with their increasing media presence, journalists need to be aware of the right questions to ask as well as some of the problems that are specific to ASEs. Here are the properties and disadvantages of five alternate sources of energy.

  1. Solar energy

  2. Wind energy

  3. Geothermal energy

  4. Biofuels

  5. Hydrogen


COMMON DEMERITS

Most ASEs have some common disadvantages that come with the fact of being “alternate”:

  1. Cost – For the same amount of money, the amount of energy delivered is lower. If the investment in the energy sector can’t be increased, then growth rates will have to be brought down.

  2. Dependence on international supplies – Many ASEs require raw materials sourced from outside the region of need. This dependence is also influenced by local factors, explained in the next point.

  3. Influence of local factors – In order to sustain the ASEs, local industries will have to absorb the demands placed on them for research and technology. Therefore, which ASE is consumed in a region will depend on what resources the region already has.


SOLAR ENERGY

Solar energy is harvested using solar cells

Each cell is a thin wafer of monocrystalline silicon that is implanted with electrodes. When photons in sunlight strike the silicon, they knock electrons loose – this is the photovoltaic effect. The electrons are then collected by the electrodes to drive a small electric current.

A single solar cell would need 24 years or more to produce enough electrical energy to power one household. Instead, huge solar farms have to be built, consisting of arrays of modules of cells, to provide for the hundreds of megawatts that nations need today.

DEMERITS

  1. Solar cells work at an efficiency of 14 per cent at a temperature of 25 degrees Celsius

  2. This ideal temperature will not be available at all points of time, which means the efficiency is going to be less than 14 per cent most of the time

  3. Chennai receives about 6 kWh/m2/day – which means a solar panel measuring 1 sq. metre will produce about 0.84 kWh per day at 14 per cent efficiency

  4. The cost of energy production is $3.4/watt.

  5. For Tamil Nadu, which currently faces a 659-megawatt shortage, the cost of production will be $2.24 billion.

  6. Solar cells produce only direct current, or DC, which is not directly usable for most applications. It has to be converted to AC, or alternating current, first, which adds further to the cost.

  7. To produce energy at a higher efficiency, other semiconductor materials will have to be used instead of monocrystalline silicon. Some examples are:

  8. Cadmium telluride - $1.76-2.48/watt, $550/kg

  9. Copper indium gallium selenide - $1.15/watt, ~$25,000/kg

  10. Gallium arsenide – $0.86-1/watt, $1,640/kg

  11. Of these, cadmium, tellurium, gallium, selenium and arsenic are all highly toxic, with known teratogenic effects (teratogenic means capable of causing birth defects in a developing embryo)

  12. All energy sources become feasible only when they can provide a continuous supply of energy. For solar farms to be able to do this, suitable storage systems will have to be provided. Further, solar cells are useless during the rainy seasons, and when sufficient sunlight is not available to provide any useful amount of energy.
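Demerits 3 and 5 above follow from simple arithmetic on the quoted figures; a quick check:

```python
# Reproducing the arithmetic behind demerits 3 and 5 above
insolation = 6.0    # kWh per sq. metre per day reaching Chennai
efficiency = 0.14   # nominal cell efficiency at 25 deg C

yield_per_m2 = insolation * efficiency
print(f"daily yield: {yield_per_m2:.2f} kWh per sq. metre")  # 0.84

shortage_w = 659e6  # Tamil Nadu's shortfall, watts
cost_per_w = 3.4    # dollars per watt of capacity
print(f"cost to cover shortfall: "
      f"${shortage_w * cost_per_w / 1e9:.2f} billion")  # $2.24 billion
```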


WIND ENERGY

Wind energy is harvested using windmills

A windmill is an ensemble of a steel tower, the blades and the wind turbine: when the blades are rotated by the kinetic energy of the wind, a turbine converts the kinetic energy of the moving blades into electrical energy.

DEMERITS

  1. Wind stations are often built to generate different amounts of power depending on the wind speeds where they’re located. Consequently, the sub-station grids that store the power temporarily must be equipped to support different amounts of power on the same transmission line. For this, they require something called a capacitor farm – which is extremely expensive to set up. This is only a minor demerit, but it is something you won’t find politicians talking about.

  2. Wind stations work at an efficiency of 20 to 40 per cent depending on the wind speeds, and given that each watt of energy costs $2.12, one megawatt of output will require $8.5 million to $10.2 million worth of input. Even though a little over two dollars per watt is low, the figure has increased by 9 per cent since last year despite an increase in the demand for wind turbines.

  3. Wind farms produce a large volume of infrasonic sound which interferes with the human sensory system, causing nausea, severe headaches, temporary deafness, hallucinations and temporary blindness amongst all age groups of people.

  4. Current research indicates that higher wind speed doesn’t always mean more power production. In fact, the chart shows that energy generation is at its highest when the wind speed is significantly lower than the maximum. As a journalist writing a story on wind power, be careful not to fixate on the “high wind speeds” – first compare the rated energy output of the turbine, and then look for the corresponding wind speed


[caption id="attachment_20821" align="aligncenter" width="504" caption="Image from Wikipedia"][/caption]
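The shape of that curve comes from aerodynamic power growing with the cube of wind speed until the turbine reaches its rated output, after which the curve flattens and eventually cuts out. A sketch with illustrative turbine parameters (all assumed, none from the text):

```python
import math

# Illustrative turbine parameters (assumed for demonstration)
rho = 1.225      # air density, kg/m^3
radius = 40.0    # rotor radius, m
cp = 0.40        # power coefficient (the Betz limit is ~0.59)
rated_w = 2e6    # rated (maximum) electrical output, W
cut_out = 25.0   # cut-out speed, m/s: the turbine shuts down above this

area = math.pi * radius**2

def power_output(v):
    """Electrical power at wind speed v (m/s), capped at the rated output."""
    if v > cut_out:
        return 0.0                          # shut down for safety
    aero = 0.5 * rho * area * v**3 * cp     # P = (1/2) * rho * A * v^3 * Cp
    return min(aero, rated_w)

for v in (5, 10, 12, 20, 26):
    print(f"{v:>2} m/s -> {power_output(v) / 1e6:.2f} MW")
```

With these numbers, the turbine already hits its rated 2 MW at about 12 m/s – higher speeds add nothing, and past cut-out the output drops to zero, which is the point demerit 4 makes.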

GEOTHERMAL ENERGY

Geothermal energy is the heat energy of the earth.

There is a continuous loss of heat from the earth’s core into the mantle and the crust. There are also radioactive materials that contribute to the heating. The energy is stored in compacted rocks, underground water bodies, and subterranean air currents. In order to retrieve the energy, cold water is pumped down toward the hot bedrock and pumped back up again as steam, which is used to power a turbine.

The Philippines, Iceland and El Salvador each produce between 25 and 30 per cent of their electricity from geothermal power plants

DEMERITS

  1. In order to find sources of geothermal energy, drilling and mining have to be deployed on a large scale, apart from scouting for underground heat sources with aerially deployed probes like satellites.

  2. Drilling costs are significantly high at $2.2 per watt

  3. There are high failure rates associated with drilling because 80 per cent of all geothermal energy is due to radioactive decay, which is hard to detect or determine from space. Even the lowest failure rate in the world – in Nevada, USA – means 1 in 5 drills will find nothing of value underground.

  4. Geothermal plants that have to deliver in megawatts need to have sufficient infrastructure to support the continuous mining and pumping of water and steam. Effectively, the total averaged cost comes to $4 per watt – significantly higher than the cost of other ASEs

  5. The best geothermal sources are those near tectonic plates – any seismological activity will pose a significant risk to the plant and to those dependent on energy from the plant

  6. Underground air currents that are trapped in geothermal wells are released when the pockets are mined. These air currents are composed of 90 per cent methane, 5 per cent carbon dioxide, and other gases – these are greenhouse gases

  7. Releasing them into the atmosphere adds to global warming

  8. Removing them from under the soil destroys the soil composition and alters the ecosystem

  9. Since methane is lighter than air, the density of air above a geothermal power plant will be reduced, making the skies in that area unsafe for air travel
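The drilling figures in demerits 2 and 3 above can be combined into a rough expected cost per watt – the expected-value step here is my own back-of-the-envelope addition, not from the text:

```python
# How dry holes inflate drilling costs (figures from the list above)
drill_cost_per_w = 2.2   # dollars per watt, per drilling attempt
failure_rate = 1 / 5     # best case quoted: 1 in 5 drills fails (Nevada)

# Each success needs on average 1 / (1 - failure_rate) attempts
effective_cost = drill_cost_per_w / (1 - failure_rate)
print(f"effective drilling cost ~ ${effective_cost:.2f}/W")  # ~$2.75/W
```

So even at the world's best failure rate, expected drilling costs rise by a quarter – and by much more in worse locations.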


BIOFUELS

Biofuels are fuels whose carbon recently passed through the carbon cycle, i.e. was derived from an organic source

Biomass is solid biofuel and is derived from wood, sawdust, grass trimmings, domestic refuse, charcoal, agricultural waste, and dried manure. These products are compacted to increase their density and used as pellets which can be combusted.

Liquid biofuels include methanol and ethanol. Ethanol is mixed with gasoline at 1:10 to increase the octane number of the fuel: the higher the octane number, the more the fuel can be compressed before detonating, and the more energy is released per volume of fuel. Methanol can be used directly as engine fuel.

Biogas is produced when organic matter is broken down by bacteria in the absence of oxygen. It mostly comprises methane, carbon dioxide, hydrogen sulphide and siloxanes (compounds built around silicon–oxygen bonds), and can be burnt to release about 19.7 megajoules per kilogram
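The 19.7 MJ/kg figure converts to everyday electrical units as follows (a unit conversion only, using no assumptions beyond the quoted value):

```python
energy_mj_per_kg = 19.7   # heating value of biogas quoted above, MJ/kg
mj_per_kwh = 3.6          # 1 kWh = 3.6 MJ by definition

kwh_per_kg = energy_mj_per_kg / mj_per_kwh
print(f"{kwh_per_kg:.2f} kWh per kg of biogas")  # ~5.47
```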

DEMERITS

  1. Various environmental models have been discussed that illustrate the merits of biofuels – resilience to high oil prices, poverty-reduction potential, sustainable production, and low cost – but the demerits below remain.

  2. All biofuels have a lower energy content than hydrocarbon fossil fuels – which means to produce the same amount of energy, a higher volume of biofuels will have to be used

  3. Methanol and ethanol are basic in nature and produce acidic contaminants upon combustion, which then corrode the valves and transmission ducts of the vehicle

  4. Methanol is hygroscopic – it absorbs moisture directly from the atmosphere – and so dilutes itself if not handled properly. This also increases the wetness of by-products of methanol combustion

  5. Even though biofuels produce no smoke when combusted, their production and use can emit more than 20 times as much greenhouse gas as fossil fuels – which means they can contribute more to global warming than the fuels they replace

  6. In order to produce larger quantities of biofuels, larger quantities of resources are necessary

  7. More water is needed

  8. More land is needed

  9. Increase in biofuel production will place some stress on agricultural output and water resources, resulting in an increase in the prices of vegetables, etc.

  10. Volatile organic compounds present in biogas, upon exposure to sunlight, react with oxides of nitrogen in the atmosphere to form tropospheric ozone, peroxyacyl nitrates and nitrogen dioxide – this miasma is commonly called photochemical smog and causes emphysema, bronchitis and asthma


HYDROGEN

Hydrogen is the lightest known element and makes up approximately 75 per cent of the ordinary matter in the known Universe by mass

Hydrogen is not a source of energy, like coal or the sun, but a carrier of energy, like light and electricity

The source of hydrogen’s energy is its high chemical reactivity – the way it explosively combines with oxygen to form water vapour

DEMERITS

  1. The catalysts required to produce hydrogen, platinum and zirconium, are extremely expensive – an industrial alternative is electrolysis, in which water is compressed to high pressures and an electric current is passed through it to break it down into hydrogen and oxygen – but in that case the compressor requires large amounts of energy

  2. Hydrogen costs $4 per kilogram at its purest and $1.40 per kilogram when it is derived from natural gas

  3. Once hydrogen has been obtained, it can be stored, transported and recombined at another location to yield large amounts of energy.

  4. In gaseous form, even compressed, every kilogram of hydrogen occupies an 89-litre tank – comparable to the fuel tank of a large truck

  5. Hydrogen can be compressed and liquefied, but an onboard cryogenic storage unit will consume large amounts of power.

  6. Hydrogen storage tanks have to be significantly stronger, and heavier, than normal tanks because high-pressure hydrogen tends to embrittle metals and leak into the atmosphere, where it can ignite on contact with air
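A quick sketch of what the quoted storage and cost figures imply for a hypothetical 5-kg fill (the fill size is my assumption – roughly what a fuel-cell car might carry – not a figure from the text):

```python
tank_l_per_kg = 89    # storage volume per kg of hydrogen quoted above
cost_pure = 4.00      # $/kg at its purest
cost_natgas = 1.40    # $/kg when derived from natural gas

fill_kg = 5           # hypothetical vehicle fill (assumed)
print(f"tank volume: {fill_kg * tank_l_per_kg} L")         # 445 L
print(f"cost: ${fill_kg * cost_pure:.2f} (pure), "
      f"${fill_kg * cost_natgas:.2f} (from natural gas)")
```

The fuel itself is cheap; it's the 445-litre tank that has no place in a passenger car, which is the storage problem in a nutshell.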





