
Thursday, 8 December 2011

The non-option of going back in time

The recent, and ongoing, debates on the merits of nuclear power, and whether we need it at all or whether it's just motivated laziness in the search for alternate sources of energy (ASEs), have prompted me to evaluate the necessity of technology in life as we know it. In fact, to take the discussion further, I want to know if life as we know it is just an epochal assessment or if it has become a template for future generations to base their functions on.

[caption id="" align="aligncenter" width="300" caption="The match-cut from '2001: A Space Odyssey' spanning four million years: we're somewhere in between and wondering."][/caption]

The penetration of technology is indubitable and has, in many ways, become irreversible. While many suggest that, owing to the extent of environmental degradation, the time has come for us to slow down, reevaluate our needs and, if possible, turn back to a time when life seemed more sustainable, I believe that in the 20th century we definitely hit a point of no return. There's no turning back from here. That means we no longer do technology any justice by referring to it as something that has penetrated our lives; we do it no justice by referring to it as a tool. We ARE what technology is. Even if it wasn't nuclear power that merited this reflection, it would've been something else. Life as we know it, more than anything else, is not sustainable. It never was; it probably never will be.

Probably.

If we rewound to a time before the internet, before the computer and the transistor, before engines and hydraulics and electromagnetism, before astronomy, geology, meteorology, exploration and trade, before literature, production and communication, we arrive at a point where there was nothing behind us, a point of "zero history". It is when we move back to this point that we truly see where some of us aspire to return to. There was nothing before us to aid us in our future quests except the human body, the then-indecipherable forces of nature, and the human mind: a vast reserve of questions, extremely limited resources, and a helluva lot of time.

When we started asking those questions, when we started to explore farther into the fog of war, we hit the future. Since then, we haven't turned back because there was nothing to go back to. Everything was an improvement, everything that contributed something to the human condition and alleviated the pains of not-knowing. Do you think it would be better for us if we moved toward a tribal way of life? No way. Sure, we could make love to the environment and not be afraid of nature turning against us in unimaginable ways, but at some point, our basic instincts will take over. They always have, and they always will, too.

The tribes that we observe living in forests and valleys today seem to present a solution to us because we've hit a wall with our energy resources and don't know where to go from that point on. But without us, without the restrictions and the hindrances we pose to the sustenance of their livelihood, tribes would've become quite something else by now. Even they would've evolved technologically, and invented their own methods of acquiring more knowledge and using it to their benefit. They live in a controlled environment, fighting to remain what they have been for the past six millennia. If we all retrogressed to that stage, we'd have gone back a few thousand years in time, but we'd begin again.

The reason we look to such "reduced" ways of life—reduced by the various techniques at our disposal today—is that we are panicking. We are finally realizing that we, as humans, are unsustainable, and we're ready to take desperate measures to assuage that thought. We are looking at what is not us and joining the dots, hoping the result will give us absolute freedom and sustainability. In effect, however, we are bound to give ourselves only the curse of changelessness. All that we have done as humans, all that we have explored and reared and produced, will then lie as waste. Let me tell you, it is easy to join the dots, but that doesn't mean the image will then come to life. There is a lot we're missing out on, perhaps because we're taking it for granted.

Yes, our energy needs are growing. For six thousand years, we've been asking questions and answering them, and this is the point we've come to. I'm not advocating that in that pursuit we lay to waste all that crosses our path; no. I'm only saying the answer to our energy needs isn't regression, isn't the reevaluation of everything that came before us. If anything, the notion of the future has made us understand that anything is possible, and that if something doesn't seem to work out, then we haven't looked hard enough. Simply because our options are significantly unviable ASEs, nuclear energy, thermal power plants and regression doesn't mean we pick regression: we pick what will sustain us in the short run so that it can power the ideas that will be necessary for the long run. If we're working to prolong something that wasn't born with the universe, we will take a hit. Let's last it out, not back down. There's a difference.

Monday, 5 December 2011

Another look at the displacement v. development debate



What is development?

I would interpret all development as a biological process, one that mimics natural processes. In this framework, development is the process through which each system attains maturity, a position of sufficiency and sustainability.

Maturity in whose interest? In the interests of the people? That is a secular definition, one that assumes everybody has all the information required to make the decision. Therefore, development comes down to transparency that enables as well as reflects informed decision-making. I say secular because such a definition of development makes it easy to exclude the world of foreign nations, which is not how things actually work.

Consequently, a nation’s developmental trajectory is dictated significantly by foreign interests. In my opinion, thus, the developmental mechanism must work toward increasing the regional purchasing power so that our say in the international trade arena holds more weight.

The problem

Displacement versus development is a problem particularly difficult to explore because, beyond the initial recognition of the government’s role as being “villainous”, not much has been said or done to establish a line beyond which displacement becomes simply necessary. The pursuit of developmental goals by the Indian government has resulted in over 21.3 million people being displaced internally, according to a 1998 report by the Indian Social Institute. Almost 5.8 per cent of them, or 1.25 million, have been displaced by the establishment of industrial estates [1].

Development seems like a two-faced engine: (1) it works toward increasing our purchasing power regarding resources we may need in the future and (2) it consumes important resources required for its sustenance that the nation may not be in a position to provide. For instance, the proposed installation of two nuclear reactors at Kudankulam will generate 2,000 MW of power, which is over and above Tamil Nadu’s shortage (659 MW). Apart from this, another 11,540 MW is to be added between 2012 and 2016 with the installation of 17 power plants around the state, eight of them having already been sanctioned [2].

Now, as environmentalists and others in the area protest against constructing a plant that poses significant risks to the livelihoods of residents, a question that goes under-attended is that of necessity: which are the quarters that require this 12,881 MW (the total of 13,540 MW minus the deficit) of energy? What has been done, either by private parties or the government, to determine if such demands are legitimate? We must ask why Kudankulam, and why not Chennai, whose population actually stands to benefit from the operation of the reactors. What else is being done to whittle down the unnecessary losses that have resulted in such huge power shortages?

In light of such seeming discrimination, the problem statements are two.

  1. Why is the government intent on displacing those peoples who are poised to benefit the least from the projects that displace them?

  2. Can development become necessary under any conditions and assume precedence over displacement?


The real villain, the real villainy

In India, I believe that the problem has never purely been one of displacement v. development, even though the conflict between those who champion either cause is what birthed it. According to Dr. Prabhat Patnaik, one of India’s leading economists, the central government’s adoption of a capitalist economic model will produce problems unique to the country because capitalism is being superimposed on an “old order” that hasn’t fully been dismantled. This is where the adivasi resides.

  1. Developmental cap - When it comes to the displacement of adivasis, a symbolic cap is necessary to discourage the government from favouring one section of the population more than the other, in the process expanding a gap that it has been elected to work against! Since government accountability for state-instituted displacement, especially as a result of land acquisition, is virtually absent, there is nothing to prevent the problem from worsening except an undertaking on the state’s part.

    It must undertake to frame its growth rate according to a plebiscite decreed by the people in the absence of congressmen and patricians at the order of a magistrate or a tribune, and it must work with that plebiscite as a strict guideline. This is similar to what has already been mandated by the new Land Acquisition and Rehabilitation and Resettlement (LARR) Bill, 2011.

  2. Consumption cap - If, at the policy level, a framework can be built that prioritizes the rights of the Dongria Kondh over the demand for bauxite mined from the Niyamgiri Hills in Orissa, what are the processes on the other side of the wall that will determine how much we reduce our consumption of aluminium by? Because we cannot plug a hole at one end of the pipe and expect the water levels at the other end to not rise up.


Conclusion

Yes, someone must suffer a loss of some kind for the sake of development, but instead of sustaining growth rates at precarious costs, why isn’t there any faith in the aspirations of the adivasis themselves? Why is it that adivasis are being targeted again and again instead of the state inculcating a program that will successfully reintegrate them into modern society, moderate growth rates to reasonable levels, and ensure that the notion of progress is no longer mired in controversy?

I hold that integration into modern society is more desirable than tribalizing modern society, because of the mechanisms we require to remain competitive with our trade and strategic partners. At the same time, there is no doubt that we have lost control over our own internal homogenization. It is true that our first duty is toward ourselves, but understanding what that duty has been lost to requires a look at India’s foreign relations and political history, which possess their own special share of flaws. However, regression in the post-globalization age would be incredibly painful.

My answer is that under some circumstances, displacement could become simply necessary. While there can be no excuse for the government to snatch land from its citizens for any use except those expediently required (such as national defence), that is a matter of enforcement: without rehabilitation and resettlement, it is simply unconstitutional.

It would be more harmful for the Indian economy, and for the important subsidies it provides that sustain large sections of the rural population, in both the short and the long term, to cease its current trade partnerships and other strategic commitments, resist globalization and modernization, and start on a path toward complete self-sufficiency. The only caveat is that we need an equitable distribution of development, because only that promises to reconcile globalization with our social harmony to some extent.

Sources

  1. http://www.fmreview.org/FMRpdfs/FMR08/fmr8.9.pdf

  2. http://www.tn.gov.in/policynotes/pdf/energy.pdf


Sunday, 28 August 2011

Black holes and information theory

When you see a star in the night sky and think to yourself of its beauty, you're doing the following things.

  1. Deciding to look up at the night sky

  2. Firing a sequence of signals through the motor neurons in the central nervous system

  3. Powering up muscles and lifting bones

  4. Positioning your eye to receive optimal amounts of light

  5. Employing a photo-chemical reaction at the back of the retina

  6. Sending electric signals to the brain

  7. Evaluating the beauty by accessing your memory

  8. Understanding the beauty and feeling it by letting it manifest as a suitable configuration of muscle positions


One way or another, we're using up energy to receive information, process it and convert it into another form which we can use. In fact, even the information we receive is a form of energy. When you log in to access your Facebook account, the letters and numbers on the screen are digitized data, and that means they're a series of data modification pulses shipped in through hundreds of optical cables, electronic circuitry and wireless data transmission systems to appear on your screen. Every physical manifestation of intention and will, in this world, is the conversion of energy from one form into another.

Now, the law of conservation of energy states that the total amount of energy in this Universe is fixed and cannot ever be changed. In that case, shouldn't the amount of information we receive and generate be fixed? At any point of time, would it be possible to generate more information than this Universe can contain? And since this Universe seems equipped only to contain so much information, is it fair to consider either that humankind's capabilities are limited or that humankind will never have to worry about that limit because they won't get there?

Second: because of the nature of the origin of this Universe, it is assumed to be a constantly expanding volume of space, and so, the amount of information that the Universe can contain also increases with it. What is the rate of increase, then? A simple answer to this question can be arrived at by considering two concepts: the Kepler problem in general relativity and black holes.

A black hole is a singularity. A singularity is a point in space where the quantities used to describe space-time, such as its curvature, blow up in a way that is independent of the coordinate system. Imagine just having neatly pressed your new sheets, and then finding somewhere a small crease that you aren’t able to iron out, as if it wasn’t there at all when you last looked, but now won’t go away however hard you try. That’s your black hole.

[caption id="attachment_20246" align="aligncenter" width="545" caption="Depiction of a black hole"][/caption]

A black hole itself is a point in space; the large black spheres we imagine black holes to be are “real” only because that is the region over which the black hole exerts its influence. This “sphere of influence” doesn’t have a definite boundary. For the sake of convenience, which physicists also see the need for from time to time, there’s the event horizon: a hypothetical sphere whose surface marks the point of no return.

Because of their massive densities, black holes exert a gravitational force that is just as strong as they are dense. Now, the Standard Model of particle physics dictates that photons, the packets of energy that carry electromagnetic radiation like light, have no mass. The most obvious consequence of masslessness would seem to be non-conformance to the forces of gravity, which means light should be able to pass by a black hole with no reflection, refraction or absorption. However, that’s not the case; in fact, black holes swallow light completely and burp a small amount of heat out. This happens because, instead of bending rays of light into themselves, black holes distort the shape of the space-time continuum itself.

Now, imagine a long stretch of flat land parallel to the surface of which a missile is fired. The missile is such that it guides itself to stay 1m above the ground at all points of time. If, suddenly, a gorge opens beneath the missile, it dips down and continues to follow the surface. Light behaves like the missile when the ground is the space-time continuum, and the only known phenomenon capable of distorting the continuum like that is a black hole – the only difference is that a black hole wraps the continuum around itself.

Now, this distortion lends itself to a very useful technique with which to measure the rate of expansion of the Universe, a technique called gravitational lensing. Consider the animated image below.



Beams of light coming from the galaxy in the background are bent around the black hole. As a result of this bending, two consequences arise:

  1. Increase in the illumination and the size of the image

  2. Apparent change in the position of the source of the image (irrelevant for this discussion)




During lensing, the distance traveled by a beam increases while the net displacement doesn't change, thereby keeping the colours of the image intact but changing its brightness and dimensions. Therefore, the black hole behaves like a convex mirror or, more commonly, a fish-eye lens. Now, the Kepler problem in general relativity gives rise to the formula:

θ = 4GM / (rc²)

Here, θ is the angle through which the light beam is deviated by the bending object (as depicted in the image above), G is the universal gravitational constant, M is the mass of the bending object, r is the distance between the beam and the bender (between "missile" and "ground"), and c is the speed of light (note that the effects of gravity are not instantaneous but travel at the speed of electromagnetic radiation).
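
To make the formula concrete, here is a minimal Python sketch of it. The masses and distances are made up purely for illustration; the constants are standard SI values, and the grazing-the-Sun case is included only as a sanity check against the well-known 1.75-arcsecond figure from the 1919 eclipse measurements.

    import math

    G = 6.674e-11      # universal gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_SUN = 1.989e30   # solar mass, kg

    def deflection_angle(mass_kg, r_m):
        """Deflection angle theta = 4GM / (r c^2), in radians."""
        return 4 * G * mass_kg / (r_m * c * c)

    # Illustrative numbers: a 10-solar-mass black hole bending light that
    # passes it at a distance of 1 astronomical unit (1.496e11 m).
    theta = deflection_angle(10 * M_SUN, 1.496e11)
    print(f"deflection by the black hole: {theta:.3e} rad")

    # Sanity check: light grazing the Sun (r = solar radius, 6.963e8 m)
    # comes out at about 1.75 arcseconds.
    grazing = deflection_angle(M_SUN, 6.963e8)
    print(f"light grazing the Sun: {math.degrees(grazing) * 3600:.2f} arcsec")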

Think of the space-time continuum as an elastic fabric and the various phenomena and objects as special designs on its surface. When, at a point, the weave is wrapped around a spherical object, the surface area at that point goes from being flat to being rounded, giving rise to a bulge that enlarges the image. In physical terms, the angular size of the image is said to have been increased.

By calculating the value of θ, the distance between the galaxy and the black hole can be established. Over the course of, say, one year, the initial and final values of θ between two objects can be computed to give the distance by which they have moved apart in that year (a short sketch of this follows below). The perfect way to understand this is the "raisin bread" model of the Universe. Consider a loaf of bread embedded with raisins. If the loaf is expanded, the number of raisins and the shape of the raisins do not change, but the distance between them changes proportionately.
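
As a rough sketch of that two-measurement idea, the deflection formula can be inverted so that a measured angle yields a distance. The lens mass and both angles below are made-up numbers, reusing the constants from the sketch above.

    # Inverting theta = 4GM / (r c^2): for a lens of known mass M, a measured
    # angle pins down the distance r = 4GM / (theta c^2). Measuring theta twice,
    # a year apart, then gives how far the two objects have drifted apart.
    G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units, as above

    def distance_from_angle(mass_kg, theta_rad):
        return 4 * G * mass_kg / (theta_rad * c * c)

    M = 1e9 * M_SUN                                  # a made-up supermassive lens
    r_last_year = distance_from_angle(M, 2.00e-6)    # theta measured last year, rad
    r_this_year = distance_from_angle(M, 1.99e-6)    # theta measured this year, rad
    print(f"apparent drift over one year: {r_this_year - r_last_year:.3e} m")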

With that, we are in a position to understand by how much the information-carrying capacity of the Universe changes as it itself becomes more voluminous. That leaves a third and last question to be answered, the question of information transfer.

Information transfer



In the process flowchart shown above, an idea is encapsulated by certain words that are then transmitted by an "information generation" controller (which is duty-bound to filter out noise and other such chaotic elements from the message). Next, the message is conveyed through a medium to an "information reception" controller (which is also duty-bound to filter out noise), and then the message is understood as an idea.

Now, while the message is being conveyed, certain errors and losses could creep into the transmission process. For instance, because of the incapacity of the system itself, data loss and/or data corruption could occur. Further, if the right medium is not chosen for the most efficient conveyance of the message, it might be understood differently, which is also a kind of data corruption. In order to avoid such possibilities, the message is sometimes amplified.

During amplification, two processes can be said to occur:

  1. The information being carried is modified: it is made stronger in order to survive the journey

  2. An accompanying signal is sent to the receiver to inform it about the changes that have been made


As such, the amplification and abridgment processes have to accompany the conveyance-medium-conveyance subsystem because they compensate for the shortcomings of the conveyances (just as if A lends money to B and B lends that amount to C, it is only B's responsibility to retrieve it from C and return it to A). A small sketch of this send-amplify-receive pipeline follows; after that, let's move on to the idea of interstellar magnifiers.
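
Here is a minimal toy model of that pipeline (nothing below is a standard library; all the names are made up): the "amplification" is a simple repetition of each character, the accompanying side note tells the receiver how the message was modified, and the medium corrupts a fraction of whatever passes through it.

    import random

    def amplify(message, factor=3):
        """Repeat every character `factor` times and report the change made."""
        boosted = "".join(ch * factor for ch in message)
        side_note = {"scheme": "repeat", "factor": factor}
        return boosted, side_note

    def noisy_medium(signal, error_rate=0.05):
        """Corrupt a fraction of the characters, standing in for losses en route."""
        alphabet = "abcdefghijklmnopqrstuvwxyz "
        return "".join(random.choice(alphabet) if random.random() < error_rate else ch
                       for ch in signal)

    def receive(signal, side_note):
        """Undo the amplification by majority vote over each repeated group."""
        k = side_note["factor"]
        groups = [signal[i:i + k] for i in range(0, len(signal), k)]
        return "".join(max(set(g), key=g.count) for g in groups)

    boosted, note = amplify("the idea to be conveyed")
    received = receive(noisy_medium(boosted), note)
    print(received)   # usually recovers the original message despite the noise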

Interstellar magnifiers

If human civilization were to spread out from Earth and distribute itself across other planets in other galaxies, communication between the various systems is bound to become a pain. In that case, suitably modified electromagnetic signals (such as light, RF waves, etc.) can be pulsed into the sky, aimed at objects with strong gravitational forces that would further amplify, or boost, them on their way. With the assistance of suitably positioned satellites, the broadcast information can then be "narrowcasted" down to the receiving stations on some planet.

[caption id="attachment_20245" align="aligncenter" width="350" caption="Cassini's gravity-assistance"][/caption]

A significant hindrance posed to this method of communication is a phenomenon called galactic extinction: when intercepted by a cloud of galactic dust, the waves are absorbed and scattered into space, the information becoming lost in the process. In order to minimize the amount scattered, polarized radiation may be considered.

The no-hair theorem

What happens when light, instead of bending around a black hole, enters it? The answer to this question is a puzzler because of something called the no-hair theorem, which states that every black hole is characterized only by its mass, charge and angular momentum. That means that if my laptop flies into a black hole, no information about the laptop can be retrieved beyond whatever change it makes to the black hole's mass, charge and angular momentum! If you cannot open an invisible door floating around my room, and if I step inside the door someday, how will you find me?

If any mass enters a black hole and is swallowed, there should be a corresponding increase in the mass of the black hole. Physicists realized that the books didn't quite balance in the case of energy: just as the black hole consumed some energy, it must also radiate some energy in order to maintain its overall state.

This radiation is called Hawking radiation (in honour of its discoverer), and later calculations found that it lay in the thermal section of the electromagnetic spectrum, i.e. a black hole radiates heat more than anything else. And since the black hole radiates heat, it must slowly be losing energy and, at some point, must also completely evaporate without a trace left behind. Using equations available in the theory of relativity, it was found that smaller black holes evaporate faster than larger ones. In fact, a black hole with the mass of a car would evaporate in 10⁻⁸⁸ seconds.
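
To see why smaller black holes go first, here is a minimal sketch assuming the standard evaporation-time formula t ≈ 5120πG²M³/(ℏc⁴); the masses below are picked arbitrarily, and the cubic dependence on mass is the whole point.

    import math

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    HBAR = 1.055e-34    # reduced Planck constant, J s
    c = 2.998e8         # speed of light, m/s
    M_SUN = 1.989e30    # solar mass, kg

    def evaporation_time(mass_kg):
        """Evaporation time in seconds, t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
        return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * c**4)

    # Halving the mass cuts the lifetime by a factor of eight.
    for m in (1.0e6, M_SUN, 1.0e12 * M_SUN):   # a small hole, a stellar one, a giant
        print(f"M = {m:.2e} kg  ->  t = {evaporation_time(m):.2e} s")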

[caption id="attachment_20247" align="aligncenter" width="545" caption="Hawking radiation mechanism"][/caption]

After complete evaporation... what about my laptop? Ultimately, we have a paradox: if my laptop went into the black hole, which then burped out some heat in the form of Hawking radiation, then is the no-hair theorem violable? Because my laptop's mass has caused a change in the interior energy of the black hole, which shouldn't happen according to the theorem.

Is the information then lost forever? Not possible; if it is, then the law of conservation of energy stands violated. Does the information also evaporate during the evaporation of the black hole? Not possible; if it did, then Hawking radiation would become inexplicable. Does the information jump out during the last moments of evaporation? Not possible; if it did, then a smaller black hole must have held on to that information that didn't participate in the evaporation. Does the information slip into a baby universe that we can't interact with? If your imagination can understand the creation of such a universe, sure. Does the information get lost in the time dimension? Nah.

Where is the information, then?


Monday, 15 August 2011

Science that's not everyday stuff

Stuff I'm working/reading/solving problems on.

Ferrofluids

A ferrofluid is a colloidal solution of nanoparticles in a carrier fluid. A colloid is a "mixture" of extremely small particles that are evenly distributed throughout a fluid; in this case, the particles are a few nanometres across and are ferromagnetic (i.e., attracted by and magnetizable by magnets).

When an external magnetic field is applied across this ferrofluid, the nanoparticles begin to clump together because they become magnetized and the magnetic force begins to draw them together. In order to "declump" the particles once the field is switched off, the particles are given a thin coating of a surfactant (like sodium citrate).

With precisely controlled fields, ferrofluids assume beautiful formations and arrangements in space. The primary uses of this fluid are in two industries:

  1. X-ray spectroscopy - The principal purpose of this device is to resolve high-energy electromagnetic radiation, and for that, it must remain extremely stable during operation. Refrigerators built with ferrofluids are called adiabatic demagnetization refrigerators (ADRs); their working temperatures lie between 0 K and 100 mK, and they provide the low temperatures required for that stability.

  2. Armour - Battle tanks have moved on from possessing just one thick layer of metallic armour to two layers sandwiching a second material. This material is a ferrofluid. During an explosion in the immediate surroundings of a tank, the first layer presses down on the ferrofluid. Because of the increased pressure, the nanoparticles in the fluid clump together and increase the density and the viscosity of the fluid (the more viscous a fluid is, the less easily it flows). This makes the fluid hardier and more resistant, so the shockwaves from the blast find it much harder to penetrate through to the second layer of metal.


Once the pressure subsides, the fluid becomes less viscous and, more importantly, less dense. This decrease in density is important for the vehicle to retain its working brake-horsepower.

Technical illustrations

Illustrating for science is tricky business. To begin with, the proportions have to be precise; I've known whole projects that had to be abandoned because someone got a metric wrong by 5% or less. Once you begin to scrutinize the images, it also becomes evident that the generation of a visual stimulus has to be carefully manipulated in order to evoke certain associations.

For example, consider three objects, A, B and C, within a frame. If A and B are considered to be more important than C is, then using the same colour to highlight A, B and C would make that distinction harder to grasp. Of course, the demarcation could be invoked using other methods, but why bother when colour is so easily represented and accessible? Using blue for A and B and grey, a slightly more muted colour, for C establishes distinction, significance and association all in one.
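
As a tiny illustration of that colour logic, here is a minimal matplotlib sketch; the labels and positions are made up, with A and B sharing the emphasis colour while C is muted.

    import matplotlib.pyplot as plt

    # A and B carry the emphasis colour; C gets a muted grey so the eye
    # groups A with B and reads C as background.
    objects = {"A": (1, 2), "B": (2, 3), "C": (3, 1)}
    colours = {"A": "tab:blue", "B": "tab:blue", "C": "lightgrey"}

    fig, ax = plt.subplots()
    for name, (x, y) in objects.items():
        ax.scatter(x, y, s=600, color=colours[name])
        ax.annotate(name, (x, y), ha="center", va="center")
    ax.set_axis_off()
    plt.show()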

My last comment is on the object itself: if A is a kind of lizard, then showing it from the top or the side won't make the same impact as would its depiction in a characteristic pose, like preparing itself to lunge at prey.

[caption id="attachment_20186" align="aligncenter" width="357" caption="Association by posture, hue and position (Source: Wikimedia Commons)"][/caption]

Large Hadron Collider

According to the Standard Model of particle physics, the Universe is composed of leptons, hadrons, gluons, bosons and quarks. Leptons are light, hadrons are heavy, gluons are sticky, bosons are bossy and quarks are weird. At least, those were what physicists thought necessary until someone asked, "What makes things heavy?" A clever physicist by the name of Peter Higgs (amongst others), working through the equations of quantum field theory, proposed an elementary, hypothetical particle, named in his honour today as the Higgs boson, and said it "gave" everything mass.

When the Big Bang happened all those years ago, the Higgs boson is thought to have formed as a result of the extreme pressure and temperature. Because of its unstable nature, it quickly decayed, but not before interacting with the other particles that were beginning to form, thereby giving them mass.

The Large Hadron Collider (LHC) at CERN is the largest science experiment in history, and has been built for the sole purpose of recreating the conditions of the Big Bang so that another Higgs boson may form. Since each particle has a distinct decay pattern, detectors, sensors and other data acquisition devices have been mounted over certain sections of the LHC to quickly capture the energy signature of a decaying Higgs. If that happens, the only thing particle physics will have left to explain is dark matter.



These sections where the detectors are mounted are where the protons (i.e., hydrogen nuclei) are smashed together, at a rate of some 40 million bunch crossings per second, at speeds approaching that of light. In fact, data has emerged that speeds of 0.99c have been attained, which means the particles were each traveling at 296,794.5 km/s. That's more than seven times around the Earth in a second. At such speeds, the mass of the particles climbs monstrously, and the temperatures at the moment of the smash beat the temperature of a billion suns.
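
For a sense of those numbers, here is a minimal arithmetic sketch; 0.99c is the figure quoted above, and the Lorentz factor γ = 1/√(1 − β²) is what drives the "monstrous" climb in effective mass.

    import math

    C = 299_792.458         # speed of light, km/s
    EARTH_CIRCUM = 40_075   # Earth's circumference, km

    beta = 0.99
    speed = beta * C                       # km/s at 0.99c
    gamma = 1 / math.sqrt(1 - beta**2)     # relativistic (Lorentz) factor

    print(f"speed            = {speed:,.1f} km/s")          # ~296,794.5 km/s
    print(f"laps of Earth/s  = {speed / EARTH_CIRCUM:.1f}")  # roughly 7.4
    print(f"gamma at 0.99c   = {gamma:.2f}")                 # ~7.1, and it grows fast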

Awesome.

Tsar Bomba

Tsar Bomba is the most powerful man-made nuclear weapon ever detonated in the history of mankind. Imagine the amount of energy released by every bullet, missile, grenade, bomb, shell, flamethrower, chemical and reaction in World War II (incl. Little Boy and Fat Man), sum it all up, and understand that Tsar Bomba's yield beat that total by a factor of 10.

Conceptualized by scientists and constructed by engineers of the Soviet Union through 1960 and 1961, the bomb was originally supposed to have a yield of 100 MT. However, during the testing phase, it was found that every suitable site that would be exposed to such fallout was populated by Soviet citizens. Consequently, the yield was reduced to 50 MT; relative to that yield, the detonation is considered to have been the cleanest nuclear explosion in history.

Tsar Bomba was a 3-stage explosive:

  1. The first stage was a fission reaction. During a fission reaction, the nucleus of an atom splits into smaller parts, releasing free neutrons, photons and gargantuan amounts of energy. Inside a cavity within the fissioning material, the second stage is placed.

  2. The second stage was a small fusion reaction. During a fusion reaction, two or more nuclei fuse to form a larger nucleus, releasing such amounts of energy as to dwarf a fission reaction. The fusion reaction releases energy when the participating nuclei are individually lighter than iron, and absorbs energy otherwise. Inside a cavity within the fusing material, the third stage is placed.

  3. The third stage was a large fusion reaction. Identical in every way but in quantity to the second stage, the third stage contained larger numbers of nuclei waiting to fuse. As the first stage went off, nuclei in the second stage became heated and compressed to a tortuous extent, reaching the conditions required for the first, smaller fusion reaction to commence. As that happened, the energy it released created the conditions for the final stage to go off.


In order to reduce the yield from 100 MT to 50 MT, a reexamination of the fusion tampers was required. Between the first and second stages, and between the second and third stages, something called a tamper was used to accelerate the fission process. As the fission reaction subsided and the first fusion stage took off, the free neutrons released would collide with the tamper, made of uranium-238, and set off a fast fission reaction. This contribution was designed in to enhance the yield of the bomb. When reducing the yield became necessary, engineers removed the uranium tamper and replaced it with one made of lead.

The lead trapped the free neutrons. Fusion took over. Fast fission was suppressed. Game over.
