Pages

Showing posts with label science. Show all posts

Tuesday, 1 May 2012

The fallacy of attaining infinity

I made a lot of trips to The Hindu office on Mount Road this past week, and I made all of them on the suburban railways. When standing still inside a train that's moving at around 80 km/hr, I'm also moving at 80 km/hr in the same direction. When walking toward the front of the train, I'm moving faster than the train; when walking toward the back, slower. All this is boring relative-motion stuff. But what about when I'm moving sideways inside the train?

When I'm moving sideways at a speed of, say, 2 m/s, the train will have moved forward 22.22 m in that same second. If I were inscribing an imaginary path on the ground as I walked, that path wouldn't be perpendicular to the train's path the way it is inside the train: on the ground, the two paths would be separated only by a small angle (as in the diagram shown below).


In the diagram, b is 22.22 m long, a is 2 m long, C is 90°, and A is arctan(a/b) ≈ 5.14°. Now, the time taken by the train to traverse 22.22 m is 1 s. Let's keep that fixed; instead, in that same second, let's move faster and faster from point A to point B (i.e., my sideways motion). If I move 3 m instead of 2, the angle A grows to about 7.69°; at 5 m, it climbs to about 12.68°. Conversely, if my sideways speed stays put while the train's speed grows very high, the value of A has to shrink toward 0°, and c, toward b. In other words, if the train has sped up to a great enough velocity, I can get from one side of the train to the other as if I simply vanished at this point and materialized at that.

For that to happen, let's make some hypothetical modifications to the train: let the breadth be 2 km instead of a few metres, and let it accelerate toward around 2,000 km/hr. Assuming that at some point the train has stopped accelerating and attained a constant 2,000 km/hr (555.56 m/s), if I move 2 m sideways in 1 s, the value of A stands at about 0.206° and c at 555.5592 m. To make A smaller still, let's say the train has sped up to 2,100 km/hr (583.333 m/s) and I move at 1 m/s. This makes A about 0.098° and c, 583.3342 m. If I move so much as 0.1 m, A becomes about 0.0098° and c, 583.333342 m - barely different from b's 583.333333 m. At this stage, A is as good as 0° and c almost equal to b.
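For anyone who wants to play with these numbers, here is a quick sketch of the angle-and-path arithmetic above; the function name is my own invention, and the figures are the post's hypotheticals, not measurements:

```python
import math

def path_angle_and_length(sideways_m, train_speed_kmph, duration_s=1.0):
    """Angle A (degrees) between my ground path and the train's path, and
    the length c of that path, when I walk `sideways_m` metres in the time
    the train covers its own forward distance."""
    b = train_speed_kmph * 1000 / 3600 * duration_s   # train's forward distance (m)
    a = sideways_m                                    # my sideways distance (m)
    A = math.degrees(math.atan2(a, b))                # angle between c and b
    c = math.hypot(a, b)                              # my actual ground-path length
    return A, c

# The 80 km/h suburban train, walking 2 m sideways in one second:
print(path_angle_and_length(2, 80))      # ≈ (5.14°, 22.31 m)

# The hypothetical 2,100 km/h train, moving just 0.1 m sideways:
print(path_angle_and_length(0.1, 2100))  # ≈ (0.0098°, 583.3333 m)
```

As the train's forward distance b grows, the angle collapses toward zero and the hypotenuse c hugs b, which is the whole point of the thought experiment.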

For someone watching me move from inside the train, I will have moved sideways at 0.1 m/s. However, for someone on the ground looking at the imaginary path, it will be as good as non-existent! This is because I will have moved from A to B in a train so fast that my path will collapse onto a, as if there were two parallel lines (one beginning at A and the other at B) and I moved from one to the other along a path that is parallel to both lines. This situation is a mathematical impossibility, and thus must correspond to a false assumption about the real world. What is it?

The simplest wrong assumptions are always associated with almosts and nearlies. Saying 583.333333 m is almost equal to 583.333342 m is different from saying 583.333333 m is equal to 583.333342 m. In the real world, for as long as we don't hit relativistic velocities (i.e., those close to the speed of light), there will always be these extremely small but furiously persistent differences - they might pass as mathematical and practical approximations, but they will always translate into very real differences.

Friday, 13 April 2012

Science in India

What really is the attitude toward science in India?

In many other countries, whether or not they have strong science programmes, the attitude toward science is well known. In China, for example - where the space programme is picking up well, high-speed railway lines are being built, and annual investment in science has grown at more than 20 per cent a year since 2000 (it is now in the neighbourhood of $100 billion) - there is open support for the cause of science and the role it must play in the country's development. In India, however, investments in R&D are half-hearted: they attract neither widespread support nor widespread cynicism. Moreover, where in most countries science funding is seen at least as a move toward indigenous military empowerment, India lacks even that.

According to a December 2010 report titled 2011 Global R&D Funding Forecast, a study sponsored by R&D Magazine, India's share of global R&D spending is 3.0 per cent (0.80 per cent of its GDP) - measly in comparison with the countries it is seen as competing with: America (34.0 per cent), Japan (12.1 per cent), China (12.9 per cent, or 1.44 per cent of GDP), and Europe (23.2 per cent). The immediate solution is not simply to step up spending but to ask why a country that has used science to rise to where it is now offers it so little support at the basic level - as if it sees science as a mere tool, to be dropped the moment its goals are achieved.

Looking at the status quo from a mediaperson's vantage point, a few habits come immediately to light. The first is the lack of outreach programmes by Indian science institutions. For a country brimming with engineers, there are too few fora that cater to the science-minded. Outside the well-worn path - the science stream in classes XI and XII, an engineering education at an IIT or NIT, then a job in the engineering sector - there is no place to engage with scientists and technicians simply because one enjoys interacting with them, to find out more about what they do and how it affects society at large. The one other place to do all this is from within media circles.

Even in the political sphere, there is abysmal engagement by politicians with the people, and vice versa, on scientific questions. Granted, we are only now setting out on the path of educating as many people as possible, through reservations and constitutionally established compulsions. However, that does not mean there is nothing to attend to higher up the pyramid: for a country that has become the focus of the world for its abundance of engineers and doctors, that plans manned missions to the moon in the near future, and that is at the forefront of nuclear science research, the most its politicians are willing to talk about is shutting down crucial nuclear power plants.

[caption id="attachment_22951" align="aligncenter" width="540"] The Kudankulam Nuclear Power Plant[/caption]

Apparently, science has already assumed a degenerate form in the country: something that can be sidelined to accrue popular support ahead of elections. Unfortunately, these are also among the more easily kept promises. Science seldom commands public opinion, and a lot of work is required in that direction to mend people's idea of its importance and the roles it plays in shaping equitable progress.

Still, where are the broader ambitions that politicians must have about safeguarding the nation's support in the field of cutting-edge physics? Where are the broader ambitions that address the country's role in nuclear non-proliferation (apart from when heads of state come visiting)? Where are the broader ambitions concerned with furthering nanotechnology research in the country in keeping with its growing domination as a centre for medical tourism?

In fact, let us not attend to such broad considerations now: a look at the attitude toward the IT sector in South India should do. The most significant R&D contribution of J Jayalalithaa, the Chief Minister of Tamil Nadu, to date has been the setting up of IT parks in and around Chennai (a move borrowed, suspiciously without much foresight, from the Hyderabad and Bangalore models). As the subsidization of IT products drew in a large volume of software engineers, a siphon effect set in and focus shifted away from other, non-subsidized industries. Now, the Pallikaranai marshlands, on which many of the IT offices are built, have taken a severe beating.

Why? Because we can't seem to understand the importance of a young man's or woman's employment in the same light as the importance of a healthy local ecosystem.

Those within the scientific community are no exception, either. Forget science outreach programmes - they are only a secondary consideration. Instead: where are the science magazines a la Scientific American? Don't Indians claim a tradition of invention and discovery dating back some 4,000 years? What killed it, then? A couple of days ago, a friend of mine had a tough time locating doctors working on stem cell research in India because university websites were severely outdated! One would think that, with the sports-and-politics hegemony of the news leaving research so little space, the paucity of media representation would drive researchers to speak about their work with some zeal - but no. Even contacting a scientist has become a hassle.

[caption id="attachment_22958" align="aligncenter" width="540"] A good example of a science institution's website that goes nowhere is that of the Department of Biotechnology (affiliated with the Government of India)[/caption]

Moreover, the contactable ones are often tight-lipped when answering questions about their own studies - and not necessarily on subjects with debatable ethical concerns attached, but on soil sedimentation, the state of plumbing, safety in power plants, metallurgy and materials engineering, greenhouse gas emissions, and renewable energy (I speak from experience). Have their ought-to-be prolific opinions dried up because of the subjects' misguided depiction in the media in the past? How do we fix it?

It is hard to blame colonialism for all this, because India's rapid rise to a position of power seems to have caused many of the problems itself. For example, the sustained mishandling of the planning, construction and operation of dams alone is sure to have dented rural India's idea of technology. Now, given a disturbing record of the national government's contumacious attitude toward rural authority, we are obliged to push harder even for projects with all-round legitimacy. In fact, perspectives have become so skewed that "all-round legitimacy" itself is now the point of contention between environmentalists and developers of every kind. You can't say "development" and not expect to be tossed into a political maelstrom.

Circling back to the first point: what is the attitude toward science in India? The nation has enjoyed a pluralism of cultures, languages and traditions for centuries - is science, too, being granted that privilege? If you think that doesn't sound too bad, think again: the thing about science is that it always has one right answer, ergo there is only one way to use it. Of course, the course of its action can be deftly regulated, but not to the point where many journalists don't or can't understand what science is really up to in the country.

Science is not the Big Dam, the Big Metro Line, the Big Power Plant that displaces thousands of people without sufficient recourse, that robs livelihoods and fills men and women with carcinogens, that is the call to arms of the poor against the rich. No; science is the helpless instrument in the hands of the short-sighted power-monger, and it must be removed from there. To do that, at least everything I have mentioned in this post must be fixed.

Sunday, 19 February 2012

Understanding accelerator luminosity

Advanced physics is essentially a study in precision, and the particle accelerators of today, working at the cutting-edge Intensity and Energy Frontiers, fight approximations every day. The particles they synthesize, track and study are so small, so quick and so short-lived that they might as well have simply popped in and out of existence without changing anything. Fortunately, though, that's not the point of studying these things at all: what matters is understanding why the "popping" happens in the first place.

[caption id="attachment_21639" align="aligncenter" width="346" caption="Some famous accelerators: (clockwise from top-left) Kō Enerugī Kasokuki Kenkyū Kikō (KEK), Japan; Tevatron at FERMILAB; CERN's Large Hadron Collider; and LINAC at Stanford Linear Accelerator Centre."][/caption]

At the world's most powerful collider, the LHC at CERN, two proton beams are shot around a 27-km-long ring in opposite directions. These are not continuous beams but ones segregated into bunches, like a pulse. By design, each beam holds 2,808 bunches, each bunch containing about 1.15 × 10^11 protons (protons being the hadrons in question), and bunches from the two beams cross each other - "collide" - as often as once every 25 nanoseconds, i.e. some 40 million times a second. At each crossing, around 2.3 × 10^11 protons are brought face to face. This is what every particle accelerator makes possible: a rendezvous.
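The bookkeeping here is worth checking by hand. The figures below are the LHC's nominal design parameters - assumptions taken from the machine's design, not measurements from any particular run:

```python
# Bunch-crossing arithmetic for the LHC, using nominal design parameters
# (assumed here): 2,808 bunches per beam, ~1.15e11 protons per bunch,
# and a bunch crossing every 25 nanoseconds.
bunches_per_beam = 2808
protons_per_bunch = 1.15e11
crossing_interval_s = 25e-9

crossings_per_second = 1 / crossing_interval_s   # 40 million crossings/s
protons_per_crossing = 2 * protons_per_bunch     # one bunch from each beam

print(f"{crossings_per_second:.0f} crossings per second")
print(f"{protons_per_crossing:.2e} protons brought together per crossing")
```

Only a minuscule fraction of those protons actually hit anything at a crossing, which is exactly why the next section's notion of luminosity matters.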

Once this is done, the detectors take over, and they are the real measure of an accelerator's performance. The accelerator will have ensured that enough collisions occur for the detector to record at least a few (the fraction actually recorded is really quite small). Ergo, to judge a detector's performance as good or bad - or, in the rare case, somewhere in between - what makes the difference is how much it is capable of seeing. This is where luminosity comes in.

The generic definition of luminosity is the amount of light passing through an area each second, so its units are per metre squared per second. Accelerator physics adopted this definition and modified it a little: accelerator luminosity measures the number of particles passing through a given area each second, weighted by the opacity of the target. This last parameter is necessary because it accounts for the tendency of some particles to escape detection by passing right through the target: if the target's opacity is high, most particles will be "seen"; if it is low, most particles will be invisible to the cameras' eyes.

(Even though the definition of luminosity indicates the number of particles that pass through an area per second, its meaning in the confines of an accelerator changes: it is the number of particles seen by a detector, irrespective of how many particles there are in total.)

Inside the accelerator and in the presence of the detector, the following differential equation dictates the machine's luminosity:

dN/dt = L × σ

Here, σ is the total cross section of the detector - the area that is exposed to and receives the stream of particles - N the number of particles, L the instantaneous luminosity, and t the duration over which the detector remains in operation. The opacity affects σ. (The 'd' denotes that the value of the parameter is being considered over an infinitesimal period of time, as indicated by the dt in the denominator. If it were dx or dy instead of dt, it would mean the value of N is being considered over a very small distance in the x or y direction.)
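To make the rate equation concrete, here is a minimal sketch that integrates it for a constant luminosity. The cross section and luminosity below are illustrative round numbers of my own choosing, not real machine values:

```python
# Events expected from dN/dt = L * sigma: integrate the instantaneous
# luminosity over the run. L is held constant for simplicity.
def expected_events(sigma_m2, lumi_per_m2_s, seconds):
    """N = sigma * integral(L dt), with L constant over the run."""
    return sigma_m2 * lumi_per_m2_s * seconds

# e.g. a process with a 1e-40 m^2 cross section, at an instantaneous
# luminosity of 1e38 per m^2 per second, over one day of running:
n = expected_events(1e-40, 1e38, 86400)
print(n)  # ≈ 864 events expected
```

In a real run L varies as the beams degrade, so experiments quote the time-integral of L - the "integrated luminosity" that appears below.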

If Ω (omega) were the solid angle through which the detector's cross section was exposed, the luminosity with respect to the differential cross section is computed as

d²N/(dt dΩ) = L × (dσ/dΩ)

This formula gives the luminosity with respect to the angular cross section (as opposed to a planar surface) - the number of particles seen per unit solid angle per second - and from here, the number of particles per volume of space can be computed easily. The formula also shows that the greater the detecting cross section per unit solid angle, the greater the luminosity per unit of that angle (call it "particle-seeability"). And for the detector to be useful at all, the instantaneous luminosity has to be high enough to catch particles so small that... well, they're incredibly small. Hence, broadly, the smaller the particle being studied, the larger the detector has to be.

There is no better way to illustrate this conclusion than to point, again, to the LHC, where the Higgs boson, one of the smallest particles conceivable, a veritable building block of nature, is being hunted by one of the world's largest detectors (which also has a misleading name): the Compact Muon Solenoid (CMS). The CMS, weighing 12,500 tons, has been able to achieve an astounding integrated (as in not instantaneous) luminosity of 1 per femtobarn: 1 barn is 10^-28 square metres (a ten-billion-billion-billionth), and 1 femtobarn is a million-billionth (10^-15) of that!
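The unit gymnastics here are easy to get wrong, so a sanity check in code helps. The 20-picobarn cross section below is a made-up illustrative figure, not a measurement of any real process:

```python
# What "an integrated luminosity of 1 per femtobarn" buys you.
# Unit facts: 1 barn = 1e-28 m^2, so 1 femtobarn = 1e-43 m^2.
BARN_M2 = 1e-28
FEMTOBARN_M2 = BARN_M2 * 1e-15

integrated_lumi = 1 / FEMTOBARN_M2   # 1 inverse femtobarn, in m^-2

# For a hypothetical process with a 20-picobarn cross section:
sigma = 20e-12 * BARN_M2             # 20 pb expressed in m^2
print(sigma * integrated_lumi)       # ≈ 20,000 expected events
```

This is why experiments chase inverse femtobarns: each additional unit of integrated luminosity multiplies directly into the event count for every process.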

[caption id="attachment_21633" align="aligncenter" width="461" caption="The total integrated luminosity delivered to and collected by CMS until 17th June, 2011."][/caption]

Another detector at the site, the much more prolific A Toroidal LHC Apparatus (ATLAS), weighs 7,000 tons and has collected an integrated luminosity of several inverse femtobarns. The under-construction iron calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO) in Theni, Tamil Nadu, will weigh 50,000 tons when completed in 2015 and will be used to track and study neutrinos exclusively. Neutrinos are even more elusive than the Higgs and, though ICAL's sensitivity hasn't been disclosed, we can expect the device to be a pioneer in detector technology simply because its sensitivity must be that high for the project to be a success.

This much and more can be said of accelerator luminosity. While the media goes gaga over the energies to which the beams are accelerated, a silent revolution in detector technology is happening in the background - one spawning brilliant techniques to spot the fastest, smallest and most volatile of particles. These detectors also consume the greater part of accelerator budgets and the greater part of total maintenance time. Some of the most advanced detectors in existence include hadronic calorimeters (HCAL), ring-imaging Cherenkov (RICH) detectors and muon spectrometers.

Saturday, 18 February 2012

The learner as far-seer

I recall a trivial incident from December 13, 2011, the day the ATLAS and CMS experiments at CERN's Large Hadron Collider (LHC) announced a possible sighting of the Higgs boson. It was not so much an incident as something I'm only observing now: when the announcement was being made by Fabiola Gianotti, the head of ATLAS, I had to pause the live-streamed video once every few seconds to look up what she was talking about. A 20-minute presentation took more than an hour to understand.

The reason I remember the experience is that, more often than not, one doesn't know when the learning phase of life ends - many, like me, don't even know what comes after it, if it ends at all. However, earlier today, when I was reading a journal article on laser-induced plasmas and their application in particle accelerators, I surprised myself by understanding the entire thing without stopping even once; I could follow the authors even when they were speaking only via formulae.

It was strangely dejecting because one of the most likeable things about particle physics in my opinion is its tendency to throw up previously unknown information just when we least expect it. In fact, even the one thing we thought we knew about this universe - the highest speed possible - was defied last year by some 15,000 neutrinos. And in such a scenario, when the picture suddenly becomes clear, when I can see the jigsaw puzzle board and the different empty shapes here and there waiting to be filled, it's as if I'm ready to start answering the bigger questions and leave the smaller ones behind.

[caption id="attachment_21624" align="aligncenter" width="334" caption="Clinton Davisson (left) and Lester Germer conducted an experiment since named after them - as the Davisson-Germer experiment - in 1927. Six years earlier, Einstein had won the Nobel Prize in physics for his discovery that particles were discrete encapsulations of energy called quanta. In 1927, French physicist Louis de Broglie presented his thesis that all particles have a wave-like characteristic. In the Davisson-Germer experiment, the two Americans stumbled across an electron diffraction pattern where they were expecting an electron diffuse-reflection pattern while studying the surface of nickel.  This proved de Broglie's informed conjecture true."][/caption]

I feel like the depressed man whose psychiatrist suggests he witness a performance by a clown in town. The depressed man then admits he is that clown.

At this point, I see two outcomes. The first one hints that I only want to keep learning and I'm not as interested in "deploying" that knowledge usefully. That is only partly true because, hey, I don't have a particle collider in my backyard that introduces new particles into my life so I can piece the universal puzzle together better. The second outcome suggests that my knowing a lot of things - science-wise and not - is, for the most part, a product of this fear of what-will-or-won't-come-next.

The first outcome doesn't bother me much because I've discovered I like teaching. Even though I may not be using my knowledge of IC engines to fix vehicles on desolate highways, I try to ensure as many people as possible understand how such engines work and can do what they want with that knowledge. The same applies to particle physics. In this case, however, the dimension of teaching acquires more weight because its capacity to be misunderstood is great: it's a developing field whose foundations are currently under fire, whose experiments are so complex that multiple governments help fund them, and whose conclusions are so counter-intuitive that the layman and the physicist are today many perspectives apart.

The second outcome is something I learnt about while writing this post. The impetus that props up my two-decade-long learning spree is nothing but fear - a fear of the unknown. To me, not learning something on a given day seems like forgoing a chance to imagine something I might never physically live. It's like the Copenhagen interpretation of quantum mechanics - a.k.a. Schrödinger's cat: for as long as I don't open the box, the cat is both dead and alive, and the experience is both there and not there.

Saying "I learn because I want to" is too bland: I learn because I want to look into the darkest corners of the universe and not see anything I can't understand or gauge in some way.

The popularly perceived notion of beauty comes with inexplicability: the capacity of an entity to defy definition and predictability, to defy structure and exhibit a will of its own in form and function. However, the silent reminder we are given every day - that no matter how far out into the universe we venture or how deep into the atom we probe, the laws of physics stay the same - is the soul of beauty. And the inexplicability I seek to defy by learning is simply understanding how the same thing that gave us the dung beetle also gave us the Carina Nebula, how the same thing that gave us the Monarch butterfly also gave us black holes.

[caption id="attachment_21625" align="aligncenter" width="335" caption="This image of the Carina Nebula is composed of multiple shots taken from the Atacama Desert in South America."][/caption]

I think that's a fear I've enjoyed and enjoy having.

Tuesday, 31 January 2012

Strange test post

This is a test post.


All strange quarks, the third-lightest among quarks, have a spin of 1/2 by virtue of being fermions and a charge of -1/3 e. The particle holds the curious distinction of having been observed (1947) before it was theorized (1964). The first observed particle containing a strange quark was the kaon.

Monday, 30 January 2012

Science education and statistical issues

This image below speaks volumes.

[caption id="attachment_21435" align="aligncenter" width="529" caption="From a report titled 'ASPIRES: Science and career aspirations (age 10-14)' compiled by the ASPIRES Project, London, 2012."][/caption]

What's keeping the kids away? More specifically, why is there an observable gap between interest and aspiration for children in the age group the report covers, 10-14? Here are some snippets from an otherwise incredibly boring report.
Research shows that young people’s aspirations are strongly influenced by their social backgrounds (e.g. by ‘race’/ethnicity, social class and gender) and family contexts where identity and cultural factors play an important role in shaping the perception of science as ‘not for me’.

To this, the report suggests as a solution a broadening of scope in classrooms, to make science a "conceivable career" for students. But that seems to trivialize the problem, which I think won't get sorted until classrooms are targeted individually. The impact of race and social class (or of caste and poverty in India) cannot be generalized in any sense.
... countries with high attainment and participation rates in mathematics (such as Japan) also record amongst the lowest levels of student liking for the subject.

In India, in 1966, the Kothari Commission's report was submitted by Dr. D. S. Kothari to the Prime Minister. It recommended that the government focus on a carefully chosen set of subjects in order to bolster the economy and meet certain important targets in the engineering sector.

Unfortunately, the curriculum created nearly five decades ago spurred a surplus of science and engineering graduates, as well as of colleges and institutions, creating a fixation that these and related courses translated into job security. Even though the situation may seem different today, a close examination will reveal that the typical Indian family is half-composed of engineers and management graduates.

My point is that participation rates - which could simply reflect parents insisting that their children study this or that and nothing else - don't necessarily translate into liking for the subjects. And when school participation rates are used by committees and organizations to predict the future composition of graduates, their reports are likely to discourage further action in the sector.

The next point I strongly agree with, in both global and Indian contexts.
Science education policy has been strongly criticised for assuming that its primary importance is to prepare the next generation of the nation’s professional scientists (the ‘science pipeline’ model).

The 'science pipeline' model also goes on to create a certain profile of the larger science population (such as the image of a geek), to which certain minority groups may not be able to relate. That leads to discouragement and a closeting of aspirations.

The solution? I don't know. Maybe awareness? I'm skeptical.

Saturday, 21 January 2012

Particle accelerators: Locations

[googlemaps http://maps.google.com/maps/ms?msa=0&msid=201473859600784872636.0004b7059c99aa608686a&ie=UTF8&t=m&vpsrc=0&z=1&output=embed&w=425&h=350]

Sunday, 1 January 2012

Science, too, evolved.

From an article on LiveScience today:
New Hampshire House Bill 1148 would "require evolution to be taught in the public schools of this state as a theory, including the theorists' political and ideological viewpoints and their position on the concept of atheism." The second proposal in the New Hampshire House, HB 1457, does not mention evolution specifically but would "require science teachers to instruct pupils that proper scientific inquire [sic] results from not committing to any one theory or hypothesis, no matter how firmly it appears to be established, and that scientific and technological innovations based on new evidence can challenge accepted scientific theories or modes."

The bills, introduced by Jerry Bergevin, a Republican member of the New Hampshire House of Representatives, would require evolution to be taught as a philosophy in schools, with teachers paying some attention to the "fact" that science often contradicts itself as new evidence is found every day. Perhaps someone should inform Mr. Bergevin that science has never contradicted itself except in the case of explicitly stated paradoxes. In every other case, if two paradigms seem to contradict each other, the one less compatible with the more carefully made observations is discarded and no longer considered part of science. And if he thinks that is why science cannot be trusted, I'm scared to imagine Mr. Bergevin's opinion of democracy.

And what's with Republicans and Darwinian theories, anyway?

Friday, 16 December 2011

The dualism of determinism

At the centre of scientific inquiry lies enshrined the deterministic proclivity of the empiricist: by understanding the Universe, we no longer remain completely at "its mercy" but, with the knowledge accrued through methodical investigation, become capable of extending our influence over nature itself. It is through understanding that we may finally assume the seat of the charioteer. It is not so much a fear of any absent measure of control over our own lives as it is the want of all that control upon discovering that such control is possible. In that regard, if empiricism is the most accessible medium of inquiry and epistemological pursuit, its ontology finds purchase in the basis of determinism - social and otherwise.

At the polar opposite of this Weltanschauung lies the god-believer, whose belief in a supernatural being is a surrender to the sum of indeterminable entities. The being itself, however, is a human construction, and thus the truth of its influence is incalculable. As opposed to the case of the atheist or "moral autodidact", the case of religion is one of fear of the want of all that control, not of its absence. Given its ontological and soteriological foundations, religion in its current forms and denominations serves to discipline more than to explain. But be that as it may.

Even if not for the sake of this argument, consider: for as long as the positivist argument cannot defy its contradiction by the argument of universal negation, science, too - as a construction of constructivist empiricism - cannot lay claim to assisting with the "process of determination", because it remains just as much a construction as religion is. Does that not mean the two partake in equal measure of the doubts of the thinker? Can he maintain that one - probably science - is assuredly deterministic while the other is not, when it is so clear that the privilege of proving a tautology rests with neither side?

Sunday, 6 November 2011

Some important calculations and comparisons

For the Tirunelveli district in Tamil Nadu, India, consider the following chart that shows the variation of irradiance against the time of year.

[caption id="attachment_20580" align="aligncenter" width="600" caption="Values courtesy Weather Underground (data from the US National Weather Service)"][/caption]

Considering the energy ratings of the major solar-panel manufacturers in the market, the average efficiency at which their panels perform (14 per cent), and the average cell temperature on a typical day of the year (34 degrees Celsius), it can be estimated that, to produce 169.1875 watts of power per panel, an initial investment of US $989,688.125 will have to be made toward the installation of 1,000 panels, each measuring 1 m to a side. Given that there are those who suggest solar energy be used instead of commissioning a nuclear power plant in the district, it should be noted that to make up the deficit of 2 gigawatts, 11,821,205 panels will have to be installed, covering an 11.82-sq.-km swath of land at a total cost of US $69,149,911,828.50, or Rs 3.39 lakh crore. That is 26 times the cost of the two reactors coming up at Koodankulam.
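A quick sketch of the panel arithmetic, taking the post's per-panel output of 169.1875 W as a given (the cost figures above depend on further assumptions I haven't reproduced, so only the panel count and land area are recomputed here):

```python
import math

# The post's figures, taken as givens:
W_PER_PANEL = 169.1875   # watts produced per 1 m^2 panel
DEFICIT_W = 2e9          # the 2-gigawatt shortfall to be covered

panels = math.ceil(DEFICIT_W / W_PER_PANEL)
area_km2 = panels / 1e6  # each panel occupies 1 square metre

print(panels)             # 11821205
print(round(area_km2, 2)) # 11.82
```

The panel count and the 11.82 sq. km figure in the text follow directly from the per-panel wattage; changing the assumed efficiency or irradiance would scale both numbers proportionally.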

With the amount of photovoltaic waste generated per megawatt of peak power, as reported by a study conducted for the German Federal Ministry of the Environment ('Development of the Take Back and Recovery System for Photovoltaic Products', 2007), standing at 6,079.375 tonnes per year (distributed across the planet for the sake of consumption in India), the quantity of waste generated by the year 2020 will be 109,428,750 tonnes. The amount of radioactive nuclear waste generated in the same period in India stands between 3,960 and 5,940 tonnes.

Thus, the principal reason the media's focus falls more heavily on nuclear energy than on other forms of energy production is radioactivity. If spent nuclear fuel weren't radioactive, nuclear power plants would make a case for themselves, considering they're much cheaper to install and maintain and produce much less waste. That is why it is important to know the processes involved in the manufacture of solar cells and the more common health hazards those processes pose: high-efficiency cells contain the mildly toxic tellurium and indium, and the highly toxic cadmium and sulfur. Of them, the last three are known to have teratogenic effects.

I'm not trying to build a case against renewable energy resources, nor am I trying to build a case for nuclear energy. I'm only making known what the majority does not already see, may not wish to see or simply refuses to see.


Friday, 4 November 2011

The nuclear energy issue

I'm the worst environmentalist around. I can perfectly understand the science of it but, beyond that, I find things to be uncharacteristically slippery. One of the causes I attribute this disorientation to is that environmentalism today has assumed a mostly reactionary nature. I agree that it is completely justified: if we don't work against the ruthless developers today, we won't have greenery to appreciate tomorrow. But I'm not passing judgment, I'm only observing that it has become purely reactionary and that is something I can't tackle very well.

In this context, when a scientist who used to work for the Atomic Energy Commission argues that India doesn't really need nuclear power but could make do with solar power, the suggestion itself is appropriated as a defense against the government's push to establish an NPP at Koodankulam. The principles behind the suggestion, however, are left for the scientists and engineers to deliberate upon. Although the government will consider the advantages and disadvantages of such an installation, the most relevant question is whether solar farms can keep up with growing energy requirements.
  1. Given Tamil Nadu's energy needs and the cost of producing 1 watt of power from 1 solar cell, the investment will have to be somewhere close to Rs. 17,000 crore (and this excludes all of the costs to follow). However, in order to produce 2 gigawatts of power continuously, high-efficiency solar cells will have to be used, with efficiencies hovering somewhere around 40%. This, in turn, places great stress on the cadmium, mercury and sulfuric acid industries, whose products are essential to a solar farm.
  2. Given the intermittent nature of solar electricity generation, batteries will have to be integrated with the farm's grid in order to store energy for later use, and as all engineers are aware, the conversion from electrical to chemical energy occurs with a loss of close to 12%.
  3. In each photovoltaic cell, during operation, a photon with a frequency in the visible spectrum of EM radiation excites an electron from the valence band into the conduction band, generating a small potential difference that gives rise to a current. The problem is that a single good cell produces only about 0.5 volts across its electrodes.
  4. Solar cells produce only direct current (DC), which cannot be used for powering appliances before it has been converted to alternating current (AC). Inverters for this purpose add significantly to the cost of installing a solar farm; at this juncture, I am aware of the "money is not a problem" train of thought...
  5. Solar electricity is "understood" by policy-makers in terms of its feasibility and grid parity. Feasibility takes into account the amount of solar energy an area receives over the span of one year and whether, at the given efficiencies, the solar cells will produce enough energy to achieve parity. Grid parity, in turn, is a determination of whether the cost of electricity generated by other means is less than the cost of electricity generated via solar cells. With twenty-two nuclear power plants in various stages of operation in India, grid parity cannot be achieved without installing solar farms that generate tens of gigawatts. Now, return to point #1. 
At the most fundamental level, solar farms can be grid-connected to reduce the load on existing coal-fired and nuclear-powered plants; this essentially means that an installed solar farm can make up for an existing shortage. The principal reason the Indian government is pushing for an NPP is that it wants to define industrial growth rates for the future, and the current state of research and development on solar cells is insufficient to support such growth rates. Now, this is purely an indictment of the government's ambitious visions, which, more often than not, disregard regional issues, but at the same time, it is definitely something environmentalists must take note of before they (or we?) begin to campaign for solar power.
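The storage penalty in point #2 is easy to quantify. A minimal sketch, in which the generation figure and the stored fraction are hypothetical and only the 12% conversion loss comes from the text:

```python
GENERATED_MWH = 1000.0  # hypothetical daily generation from the farm
STORED_FRAC = 0.5       # assume half the output is banked in batteries
LOSS = 0.12             # electrical-to-chemical conversion loss cited above

direct = GENERATED_MWH * (1 - STORED_FRAC)               # used immediately
from_storage = GENERATED_MWH * STORED_FRAC * (1 - LOSS)  # after conversion
delivered = direct + from_storage

print(delivered)  # 940.0 MWh delivered of every 1,000 MWh generated
```

The loss scales linearly with the fraction stored, which is why over-provisioning batteries compounds the capital problem in point #1.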

Such a consideration is important because it characterizes an attitude. The ongoing protest at Koodankulam by its inhabitants is completely justified because the plant will impact their livelihoods. When a firebrand environmentalist jumps in, however, and calls for solar farms to be installed instead, he is not making a good case for himself; what he should have called for instead is that the government reexamine its development policies. He should have protested against the government's attempts to maintain a "sustainable" growth rate by effecting drastic projects, instead of offering alternative solutions that will only continue to sustain such unreasonable expansion. For as long as reactionary environmentalists campaign merely against a particular thing, they will not have campaigned for the right reasons.

(What are the right reasons? Against crazy growth rates? How is a crazy growth rate defined when there is no notion of a fixed quota? Where do we draw the line? I'll save that for another day and another post.)

On a final note: why the Indian government insists on installing an NPP along the coast is beyond me. Many of the lessons of Fukushima have been learnt too literally by other countries - the safeguards that were missing there aren't missing everywhere else - but what a natural disaster did was shift the focus away from having a copious source of radiation established along the seashore. At the least, the government could have assuaged opposition from such quarters by planning for the NPP to be installed further inland.


Monday, 3 October 2011

Clear and present danger

“Revolutions in information and communication technologies have always been based on small findings in solid-state physics,” quips Dr. G. Baskaran, firmly establishing both the place and scope of technology. Affiliated with the Perimeter Institute in Waterloo, Canada, Dr. Baskaran is a renowned theoretical physicist. He recently delivered a short lecture at the Asian College of Journalism, speaking on everything from the role of science and the ongoing effort to explain superluminal neutrinos to the future of science.

His statement couldn’t have come at a better time to remind the world of the necessity of science – and its techniques that we call technology. In the face of looming budget cuts in the USA and Europe, politicians and policy-makers have been raising serious questions about the necessity of everything from privately-owned small research labs to proposed upgrades to the Large Hadron Collider (LHC) at CERN.

The evolution of science and technology has been associated with greater unity amongst peoples, Dr. Baskaran said, and with better health, wealth, education and opportunities to preserve our culture. “There is some responsibility also,” he adds with a confidence matured by experience.

With perhaps the greatest ICT revolution at its peak, his words suggest that the technology fuelling it is also maturing in terms of its acceptance and social penetration. Perhaps it is time for the world to get on the wagon, increase its investments in R&D, and start saving up. The future, it seems, stands only to gain, because historical ties are snapping in the face of a rupture that is allowing previously lagging nations like India and China to give the erstwhile leader, the USA, a run for its money. Increased capitalist traction in the form of tablet computers and smartphones should be thanked for this.

Perhaps the best example of such an opportunity is the increasing feasibility of multi-state-owned research laboratories. The pioneer in this regard is CERN, which was funded and built by 12 countries in 1954, a number that has increased to 20 since, and currently receives funding from 69 countries worldwide. Next in line are the soon-to-come International Linear Collider (ILC) quartered in Japan and the ITER (International Thermonuclear Experimental Reactor) in France, as brought to light by Dr. Baskaran.

Such projects ease the burden on countries that wish they had the data from experiments but can’t provide the land to build the lab in the first place. In the case of CERN, the land belongs to two countries, the running costs to 69 nations, the responsibility to more than 7,300 physicists and engineers, and the experimental data to 6.6 billion people. Such overwhelming benefits require only a distributed investment model and cross-border trust to realize; alas, the latter is the greater obstacle.

Consider the superluminal muon neutrinos reported by the Gran Sasso National Laboratory in Italy on September 23. In the absence of a unifying agency, the data would have been consumed by Italian researchers alone, keeping the world at bay for however long it took to verify the results and get them published.

Now, a Puerto Rican or a Chilean has as much chance of explaining the phenomenon as does a Pakistani or Indian scientist. In fact, not only does the entire scientific community benefit by the sharing, but the chances of discovering something that will define the next big revolution are also increased.

(When asked about the strange occurrence, Dr. Baskaran asserted that, owing to the small mass and low interactivity of neutrinos, the finding would change our perceptions of the Universe more than it would change existing energy-generation technologies. That, in turn, he said, will present new possibilities to produce more energy.)

A persistent sign of hope for India is its assistance with the construction of superconducting magnets at the LHC, which even now are steering beams of protons, and its significant contribution to the establishment of ITER. Further, Dr. Baskaran also revealed news of a proposed India-based Neutrino Observatory (INO) at Theni, to be run by the Government of India.

Alright, enough of taking comfort from the successes of the present; where are we headed? What does the future of science look like? The Tevatron has been closed, the baton has been passed to Europe to continue to look for the Higgs boson, the INO is under construction, and scientific representation is on the up. What about nanotechnology? It’s common knowledge that the Indians didn’t pay sufficient heed to Mr. Feynman. Is there still some space at the bottom?

We wouldn’t know, or, as Dr. Baskaran says, “There is nanomoney being spent on nanotechnology.” Citing India’s rise as an important centre for cheap but good medical care, he points out the sectors our industries could capitalize on if they only took nanotech to the common man, akin to Gandhi’s talisman. There’s drug delivery, magnetic-resonance imaging, NEMS (nano-electromechanical systems) and, on another note, quantum computing. By continuing to fail to look into these sectors, we're not only losing out in the international arena but also denying our citizens opportunities for employment, knowledge and possibility.

So, are we again looking at the dearth of planning that has failed to incentivize the study of science in the country? Yes, at least in part. However, initiatives like InSPIRE – which is a 5-week long immersion program that reconnects Indians abroad to Indians at home – bear promise. On a final note, Dr. Baskaran insists that instead of continuing to depend on the government, which in turn depends on internally available resources, it is time to utilize the abundance of intellectual property within the nation and trust in the democracy of science.


Saturday, 1 October 2011

Writing for science

Writing for science is no simple task. It requires proficiency not only to write well but also to know what bad writing is, so that one can stay away from it. That is only to be expected, considering a majority of the population isn’t scientifically aware, per se; the product of the process should therefore be both informative and instructive. The science writer’s task doesn’t stop at apprising the reader, while the news writer’s does, but includes the responsibility of making the reader understand what one is writing about at all.

The audience for science writing is limited in the sense that its expansion is difficult and often expensive. On the one hand, there is the science writer herself: she must be versed enough with the subject to know what she’s talking about and its impact on the people (if any – and thus the sensitivities involved). On the other, there’s the reader, who must know of the event/phenomenon being written about, any significance associated with it that is being expounded on, and what stands to be taken away from the literary product.

In order to expand this entire sub-system of journalism and ensure that important scientific concepts and events reach the masses, there must be a simultaneous increase in commitment to the task on the science writer’s part and a similar commitment on the reader’s part.

As far as the writer is concerned, there are two important problems she faces.

Language

The language used in scientific writing can be of two kinds: technical and instructive. Technical writing can be repeatedly broken down into simpler and simpler points, rendering it fully instructive. As for the purely instructive parts, the parts in which the writer is educating the reader about an idea, the writer does not have to concern herself with conceptual barriers but only with barriers pertaining to memory and cognition. No sentence should run longer than 12 words, no word should have more than 3 syllables, and the flow of logic should be step-wise and linear, not convoluted.

Logic

Basically, the writer has the advantage of knowing what the big picture is. While trying to understand a smaller aspect of it, she has the benefit of drawing upon her knowledge of similar areas to construct a logical framework that she thinks explains the phenomenon. The reader may or may not know these other things, and since every writer writes to address the weakest link, she must start from scratch. For example, while attempting to explain topology, she cannot take the example of sets and set theory but must instead take the geometry path: more arduous, but less complex.

*


A third and less evident problem is that of choosing what to write about. Considering that the audience for science writing is, as such, small, and keeping in mind how easily those numbers can dwindle, the writer must bear in mind that concepts without direct impact will attract only those who are curious, don’t mind knowing or pursue a hobby in that direction. The writer may be curious, but writing for the weakest link entails writing for a reader who is not.

This is not an interpretation that goes against any kind of scientific literature; there are different forums for different kinds of writing. When writing for newspapers and/or magazines, the value of the space being filled by the piece must be borne in mind: the priority is to familiarize readers with a common scientific concept rather than to go on about obscure phenomena. Once a section of the audience has been reliably retained, other things can follow.

The reader making an effort in this direction – to somehow mobilize his resources toward acquiring scientific knowledge – only makes the writer’s tasks simpler. Even though it is not for the writer to ask of such things at any stage, a general awareness regarding science is mandated by other things, such as one’s livelihood and one’s responsibility toward the environment.

Thus, looking at this writer-reader interface from the reader’s side, the writer can be understood as fulfilling certain needs of the reader, needs that are defined by the latter’s lifestyle and interests, and in some cases, traditions as well as nationality.
"It shall be the duty of every citizen of India to develop the scientific temper, humanism and the spirit of inquiry and reform"

- Article 51A(h), Constitution of India

The difference between science news and other kinds of news is that the pervasion and penetration of science is 100 per cent as opposed to other events, whose impacts are determined by proximity, beliefs, professional interests and other such things. Even then, the situation is that the necessity for that kind of knowledge is mitigated by social and economic circumstances. For instance, a farmer who uses pesticides can be happy knowing which chemicals go well with the soil content, his crop and the kind of pests in his farm. He may not want to know the ingredients of the pesticides or the consequences in case of an overuse at all.

So, the responsibility to enhance the awareness of such a population can be bifurcated: one half is attended to by the media and the quality and quantity of the stories they choose to address, while the other half should be met by exposing people to the technical aspects of their daily lives through programmes, workshops and field trips (at the school level) as may be suitable. In other words, unless readers know why a science column in a newspaper is pertinent to them, the value of that knowledge is next to nothing even as its necessity grows.

Monday, 5 September 2011

The relevance of racial realism

There are three people in a room: A, B and C.

They each possess the following skills in varying levels of excellence:

  1. Zeroth Skill (ZS)

  2. First Skill (FS)

  3. Second Skill (SS)




ZS: A > B = C

FS: A > B > C

SS: A < B = C
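The three orderings above can be encoded and checked mechanically. A minimal sketch: the numeric scores below are invented purely to satisfy the hypothetical comparisons in this post, and carry no meaning beyond that.

```python
# Hypothetical skill scores for A, B and C. The numbers are arbitrary;
# only the orderings between them matter.
skills = {
    "A": {"ZS": 3, "FS": 3, "SS": 1},
    "B": {"ZS": 2, "FS": 2, "SS": 2},
    "C": {"ZS": 2, "FS": 1, "SS": 2},
}

# ZS: A > B = C
assert skills["A"]["ZS"] > skills["B"]["ZS"] == skills["C"]["ZS"]
# FS: A > B > C
assert skills["A"]["FS"] > skills["B"]["FS"] > skills["C"]["FS"]
# SS: A < B = C
assert skills["A"]["SS"] < skills["B"]["SS"] == skills["C"]["SS"]

print("all three orderings hold")
```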

If A, B and C belong to three different races, and if the observations above applied to all individuals of the same races as A, B and C, respectively, then isn't it fair to identify the strength of each trait in a race with the race itself? Deliberately obfuscating such a persistent pattern, in light of the social and cultural damage that racism has wreaked, may be right when moderated by humanitarian ethics. Then, however, the scientifically provable biological reality of the association between skill and race will be lost. (I haven't mentioned any real traits; the racially associated skills in the comparison above are purely hypothetical.)
"If anything, I’m a race realist, and that simply means to recognize and understand the fact that there are racial differences and that these differences have an impact on society, education, crime, and many other aspects of life. It is not being a racist. It’s simply being a realist and a person who’s in search of solutions, rather than simply allowing these problems to continue to escalate."

Carol M. Swain

A century of heady and steady technological progress has, more than anything else, taught us that statistical determinism still remains largely a son of chance. That means we must be as economical as we can with the data we have in order to make the best use of our human resources. The real problem with racism lies in the threat posed by its misunderstanding and misuse. In the wake of racially driven wars and sociopolitical movements, so much as recognizing a race has come to seem deplorable, and racial realism has been condemned along with it. I'm not a racial realist. I only postulate that if the biological reality behind race (the fact of race, that is, not discrimination along its lines) stands proven, then we mustn't back away from that conclusion merely because the history of one scientific fact was blighted by the idiocy of humankind.

Moreover, continuing from the example, discrimination-by-race can occur only when a community forms that prizes, say, the First Skill, and so constitutes itself solely on the basis of an inequality: A > B or B > C or A > C. In that case, a skill's association with a race is not to be blamed so much as letting that association lend itself to the creation of such a community. In other words, the claim that "A can do this better than B can" can reek of racial realism as much as it wants to, but it can't acquire a socially judgmental connotation by itself if not for the society that tags it so.

And now, I'll start reading The Bell Curve (1994).


Wednesday, 27 July 2011

The invisible bridge

In a college of journalism, you'd think the blog posts would have to be better researched, better articulated and better delivered. While the latter two are a matter of personal choice, the first – good research – becomes unnecessary. When I was at an engineering institution for my undergraduate degree, I blogged so frequently and so much that people began to take notice: I was "the blogger". At that point, the things I wrote about had nothing to do with engineering, and it is surprising how many engineers would have trouble understanding Foucault or Nietzsche or Machiavelli. Consequently, I began to research my posts well, ensuring no wrong information got out – directly or by way of interpretation.

Here, at ACJ, one misstep and I'm screwed. Honest and crazy intellectuals alike will pounce on the slightest mispronunciation and attempt to secure an argumentative victory. I don't rattle off facts and statistics like I used to – not because I've forgotten them but because it seems as though just listening to those around me will vindicate half the fees I paid. At first, all the hooked-up contentiousness seemed sensational. Now, it seems abjectly pointless. Even in institutions such as this, apparently, there are people caught between the states of "good journalist" and "bad journalist". I could be one of them, but I must admit that hypocrisy – as my friends will observe – has not bothered me. As much as that may seem to work against my journalistic cause, I'd clarify that this is a different kind of hypocrisy. Yes, another kind.

Enough of that BS now. Picking up from where I left off: blogging in an encouraging environment seems to be weaker than blogging in a discouraging one. From an engineering standpoint, that's only sensible: a system operating between two states with a large difference in enthalpies does much more work than one operating between two states with a small difference. However, that also requires the input to be greater (because a greater "quantity" of work must be done to raise the system's potential energy by that much). Similarly, blogging in a discouraging environment is bound to produce greater results if only I persist with it (keeping the time-frame finite, the results are only going to be better than they were during the period of encouragement – which should do).
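The engineering analogy can be put in rough numbers. A minimal sketch, assuming an idealized system where the work extracted per unit mass is simply the enthalpy drop between the two states; the function name and the enthalpy figures are invented for illustration:

```python
def work_output(h_high, h_low):
    """Work extracted per unit mass between two states,
    modelled simply as the enthalpy drop: w = h_high - h_low (kJ/kg)."""
    return h_high - h_low

# A large gap between the states (the "discouraging" case) yields far
# more work than a small gap (the "encouraging" case) -- though reaching
# the high state in the first place also demands a larger input.
w_discouraging = work_output(h_high=3000, h_low=500)   # big gap
w_encouraging = work_output(h_high=3000, h_low=2800)   # small gap
print(w_discouraging, w_encouraging)  # 2500 200
```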

I love thinking about things the way an engineer would: systematically, without any fuss whatsoever, always knowing full well that if something's wrong, I'm also going to know whether or not it's in my hands to fix it. If it is, then I will. If it isn't, then I won't. The mathematics at work behind all this structure and formality ensures that if things are right in principle, the rest will fall into place. That's the invisible bridge that spans the distance between cause and effect. Science is the grammar where literature supplies the right reasons, and if you see a problem – any problem – all you have to understand is that you're looking at many invisible bridges waiting to be stepped on.