Misplaced Pages:Reference desk/Science: Difference between revisions

Revision as of 01:23, 11 January 2016 by Anna Frodesiak (talk | contribs): Wind direction confusion
Revision as of 01:27, 11 January 2016 by Medeis (talk | contribs): WP:NOTHOWTO It's bad enough we have what are at best homework questions, and at worst requests for professional advice. See the guidelines at the top of this page and adhere to them.


: It depends on whether you're talking about ], ]s (like ] or ]), antipsychotics with antihistamine activity (like ] or ]), drowsy antipsychotics without antihistamine activity (]), ]s (like ]), or a ]. ] (]) 23:36, 10 January 2016 (UTC)

== synthesis of tetrabutylammonium borohydride versus a PTC reaction of ] + ] ==

I have a large amount of tetrabutylammonium bromide (TBAB) (although the iodide form is also available for about 130-140% the cost). While I'm waiting for another reagent to arrive, I've wondered about synthesising and using ''tetrabutylammonium borohydride'' (TBA-BH4) directly. One advantage would be that maybe I would need to use less water, or maybe not even perform a PTC reduction at all. A hypothetical procedure is as follows:

# Dissolve TBAB in acetone
# Dissolve sodium borohydride in acetone**
# Sodium bromide precipitates
# Decant acetone from sodium bromide
# Quickly evaporate (rotovap/vacuum) acetone before it has a chance to significantly react with the borohydride**
# TBA-BH4 remains

** half-life of sodium borohydride in acetone is about 13 minutes, or 90% rxn in 40 minutes

If I'm able to evaporate the acetone quickly, I think I could get a decent yield. Of course, acetone is still moderately reactive, and I'm not sure if I want significant alkoxide counterion impurities. Is there another choice of solvent for a Finkelstein-ish metathesis (])?

Partially this is due to my concerns about exposure of newly-formed imines to water even in a PTC rxn. I need the borohydride to react with the imine, but I also don't want my water to react with my imine. Is there a way to promote the lipophilicity of the borohydride ion? ] (]) 00:23, 11 January 2016 (UTC)
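As a side note on the kinetics quoted above: assuming simple first-order decomposition of borohydride in acetone, the 13-minute half-life and the ~90%-reacted-at-40-minutes figure are mutually consistent, and one can estimate how much survives a given workup time. The 10-minute workup below is an illustrative assumption, not a measured value.

```python
import math

# Assumed simple first-order decomposition of borohydride in acetone,
# using the ~13-minute half-life quoted above.
HALF_LIFE_MIN = 13.0
k = math.log(2) / HALF_LIFE_MIN  # first-order rate constant, 1/min

def borohydride_remaining(minutes: float) -> float:
    """Fraction of borohydride surviving the given contact time with acetone."""
    return math.exp(-k * minutes)

# Consistency check against the figure quoted above (~90% reacted at 40 min):
print(f"reacted at 40 min: {1 - borohydride_remaining(40):.0%}")

# Hypothetical 10-minute decant-plus-rotovap workup:
print(f"surviving a 10-minute workup: {borohydride_remaining(10):.0%}")
```

On these assumed numbers, even a fairly brisk workup sacrifices a substantial fraction of the borohydride, which supports the "evaporate quickly" concern.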


== Wind direction confusion ==

Revision as of 01:27, 11 January 2016

Welcome to the science section of the Misplaced Pages reference desk.

Want a faster answer?

Main page: Help searching Misplaced Pages


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.


Ready? Ask a new question!


How do I answer a question?

Main page: Misplaced Pages:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 5

Fighting fire with fire

When fighting fire with fire, how do they know the right time to ignite the backburn? 2601:646:8E01:9089:5D45:D5AF:855B:E677 (talk) 01:16, 5 January 2016 (UTC)

Not a subject expert, but I believe the primary issue in when to ignite controlled burns is the weather. The wind needs to be blowing the right way, towards the main fire, and to be reliable: no sudden changes in wind speed or direction. Aside from that I don't think there's a specific time when it's "right" to start the fire (besides "before the main fire is on top of you" I suppose). The basic idea is to burn fuel with a fire that goes towards the fire you're fighting. Then there's nothing left to burn, and the "bad" fire ideally just dies. --71.119.131.184 (talk) 01:52, 5 January 2016 (UTC)
There's the aspect of burning the fuel before the main fire arrives, but there's also an effect of changing the direction the air is flowing. Firestorm discusses cases where fires change air flow patterns. So, in this case the timing would be more critical, since the wind direction can only be changed while the fire is actually burning. StuRat (talk) 02:32, 5 January 2016 (UTC)
My only knowledge of this is from seeing it done on television and I can't see anything online that describes it too well, so I'll describe what I've seen from memory. As a fire burns it consumes oxygen and, in the case of large fires, they consume massive amounts. As the fire advances it sucks in the air all around it, creating a vortex, and the base of the vortex extends well in front of it. The firefighters stand in front of the advancing firewall and wait until they feel the air around them being pulled in towards the fire - which becomes pretty obvious as it creates an increasing rush of wind. At that point they light the fuel (forest litter, saplings, branches etc.), retire quickly, and hope that the consumption of all available fuel leaves the fire with nothing to burn and nowhere to go. Needless to say, it's very dangerous, it's only used as a last resort, and it can make things worse. Richerman (talk) 11:18, 5 January 2016 (UTC)
Perhaps the best place to start reading is: the United States Forest Service webpage, Managing Wildfires. Also note that some states - like California - have massive state-level organizations (e.g., CalFIRE), and though there is much cooperation, these different organizations sometimes follow different rules and strategic policies for fire management.
Federally organized wildfire fighters use backburning, including drip torch crews. For prescribed fires - those that are set intentionally for land management - a specialist who is an expert in forestry management writes up an environmental assessment, and a formal paperwork process is required to make sure that the burn will be safe and legal. At a high level, this process is explained in the Guidance for Implementation of Federal Wildland Fire Management Policy document.
Here are a few great resources published by our Forest Service:
  • What the blazes is a prescribed fire?
  • Fire Management Today - a free periodical publication: each issue contains case-studies, technology and science reviews, and the latest and greatest news on federal wildfire management
    • The current issue has a whole exposé on using smart-phone apps and other technology for data fusion to combine weather, aerial firefighter reports and photos, and ground crew information, to make better on-the-spot decisions
  • Training Resources - linking to several short courses, training facilities, and other resources to help promote good fire-fighting policies
  • Interagency Prescribed Fire Planning and Procedures Guide, a detailed technical guide from the National Wildfire Coordinating Group that explains how to plan and implement a wildland fire by intentional ignition
Forestry is a big deal, and it's very scientific: you can get an advanced scientific degree in forest management, economics, ecology, and industrial applications. Proper management of natural resources is very important, for ecological and economic reasons; this discipline has therefore developed very rigorous techniques and theory. For example, here is information from the Forest Management program at my alma mater.
Nimur (talk) 15:46, 5 January 2016 (UTC)
Thanks, everyone! So from what you told me, when the wind shifts toward the main fire is the right time to break out the driptorch? (BTW, I'm NOT planning to do any controlled burning on my property or elsewhere -- only highly-trained firefighters are allowed to do it because it's so dangerous, that much I know.) 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 05:36, 7 January 2016 (UTC)
Weather, including wind, is only one of many factors that are considered. From the reading I did while browsing the sources I linked, it looks like wind and weather actually rank much lower on the list than other items, like paperwork (!), human factors, resource availability, proximity to roads and structures, terrain, and other considerations. All of that has a bigger impact than the present wind condition.
Even if we consider only the weather factors - it appears that fire hazard prediction (largely based on rainfall statistics and forecasts) plays a bigger role than wind when firefighters are deciding a course of action. Once a burn plan is decided at the strategic level, wind will make a much bigger difference at the tactical level, e.g. for the firefighters on the front lines.
Nimur (talk) 15:50, 7 January 2016 (UTC)

Where can I find a map of average daily temperature range in the US?

I can't seem to find one. Average seasonal range is easy to find but not average range between a day's high and low. Sagittarian Milky Way (talk) 01:30, 5 January 2016 (UTC)

At yr.no, you can get graphs of average high and low temperatures per month for a specific place, such as Washington D.C. The data the graphs are based on is fairly old, though, and probably available elsewhere as well. --NorwegianBlue 07:07, 5 January 2016 (UTC)
That would be a lot of work to make even a rudimentary map yourself. I wonder if there's one already made where it shows average daily range. For instance, the average daily range where I live is around 15°F, it should be more where there's more continentality and/or aridity (when I saw one of those charts for Montana I was like holy crap, it's 90°F in the day and 50°F at night). Sagittarian Milky Way (talk) 07:26, 5 January 2016 (UTC)
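If no ready-made map turns up, the underlying number is easy to compute from station records: the average daily (diurnal) range is just the mean of (daily high − daily low). A minimal sketch, with invented station values:

```python
# Hypothetical daily highs and lows for one station, in °F (made-up numbers,
# just to illustrate the computation being discussed).
daily_highs_f = [88, 91, 90, 86, 89]
daily_lows_f  = [52, 55, 50, 49, 54]

# Mean diurnal range = average of the per-day (high - low) differences.
diurnal_range = sum(h - l for h, l in zip(daily_highs_f, daily_lows_f)) / len(daily_highs_f)
print(f"average daily range: {diurnal_range:.1f} °F")
```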
  • Average Daily Temperature Range
  • Seasonal Temperature Range for mean temperature
  • Annual Average Temperature Map
Occasionally, just occasionally, someone will ask a question like this and get really awesome results. I created the three images at right about two years ago in my professional capacity as Lead Scientist for Berkeley Earth. I've now arranged to release these images under CC-BY. The first image shows the average diurnal temperature range and should answer your question. Note that diurnal range also has a strongly seasonal component, and these are the annual averages. The second image is a similar map for seasonal temperature range since you also mentioned that. (Siberia is a hard place to live, though actually not so bad in the summer.) The last map shows the annual average temperature. For the record, I haven't added these images to any articles, so if someone can find some suitable page(s) to add these materials it would be appreciated. Dragons flight (talk) 10:56, 5 January 2016 (UTC)
That's very neat, thanks. I put them in a new article continentality cause I didn't want to mess with the important and image-filled climate, climatology and temperature articles. Others are welcome to think of other articles or see if they want to mess with those three. Sagittarian Milky Way (talk) 18:08, 5 January 2016 (UTC)
And the article was redirected back to humid continental climate. It wasn't very good but maybe continentality should redirect to continental climate instead? Humid continental is not very accurate as most of Alaska and Siberia is very continental (Verkhoyansk! 99°F to -90°F) but classified as subarctic climate. Sagittarian Milky Way (talk) 22:33, 5 January 2016 (UTC)
Great images. What are those two odd spots of large temperature differences in Poland on the first map? Mountainous regions? Fgf10 (talk) 12:08, 5 January 2016 (UTC)
No, not mountains. Basically I don't know. It is an odd enough blip that when I originally made this map I did go back and double check that this quirk was really in the input data, but nothing there looked crazy. I don't think it is an error or bad data (though I wouldn't totally rule that out either). If it isn't an error, my guess would be more variable local weather, but I don't have any really convincing explanation for a weather pattern like that either. Dragons flight (talk) 12:31, 5 January 2016 (UTC)
How interesting! Fair enough, cheers for the reply. Fgf10 (talk) 14:54, 5 January 2016 (UTC)
Maybe somebody forgot Poland. --71.119.131.184 (talk) 16:50, 5 January 2016 (UTC)
Those dots on Poland are interesting. I think the southern one might be the Błędów Desert, a man-made region of barren sand around a medieval silver mine, which our article on Poland describes as the country's "only desert". My guess is there's another man-made feature that hasn't been properly credited. Though I should also note that the area of high temperature extremes is bigger than the feature as marked on the map. Apparently it had been spreading, though recent efforts partially reversed this. Come back in a thousand years and I suppose much of Europe may be a double for the Australian outback... Wnt (talk) 18:59, 5 January 2016 (UTC)
The file pages say that it's extrapolated from weather stations so if the closest stations to the Błędów Desert one(s) are far away that could make it look very big. Sagittarian Milky Way (talk) 22:33, 5 January 2016 (UTC)

What's with the average daily temperature range in Antarctica ?

(Top chart.) There seems to be very little change for the most part, but one spot has quite a lot. Why is that ? StuRat (talk)

How many viable human DNA combinations are there?

Calculations that I found regarding DNA permutations seem to consider all combinations. However, many of these are obviously not viable humans. What is the total number of DNA combinations minus the total number of combinations that would produce deeply disabled, brain-dead or dead humans?--Scicurious (talk) 16:55, 5 January 2016 (UTC)

I doubt this is a question that can be answered (currently), given that we do not know gene expression or even protein folding into usable shapes well enough or fast enough to make such a determination. We're still figuring out even basics of expression such as epigenetics. --OuroborosCobra (talk) 17:00, 5 January 2016 (UTC)
Seconded. However, I will say that the vast majority (probably 99.9999%+) won't be viable. There are many proteins that are absolutely required for life, and it may only take a single base pair change for those to be dysfunctional. A random scramble will almost always result in something lethal. Fgf10 (talk) 17:49, 5 January 2016 (UTC)
I agree that the number is incalculable, but would argue that the nonviable mutation rate would not approach 100%. There are vast stretches of DNA in which most mutations would be neutral. Even for critical genes, there are databases full of variant genes (e.g. Online Mendelian Inheritance in Man), not all associated with a disease state. There is a bias towards the presence of a disease state in the databases, since health issues are the likely reason a search was initiated. The number of possible viable mutations and the total number of possible mutations are so vast that ratios are difficult to calculate, especially given that the probabilities have to be weighted, since not all mutations are equally probable. additional reading BiologicalMe (talk) 18:31, 5 January 2016 (UTC)
Ignore User:Fgf10. The answer is potentially infinite, given non-coding junk DNA. μηδείς (talk) 21:09, 6 January 2016 (UTC)
Care to explain that one User:Medeis? You are most certainly wrong, since there are not an infinite number of bases in the genome. Also, take into account that 'junk DNA' is mostly not actually junk, but regulatory sequences. Fgf10 (talk) 17:41, 7 January 2016 (UTC)
Well, in a way it's all academic, since we can't calculate it. However, in your example, OMIM mutations are those found in living humans. Therefore you can automatically say none of the mutations that would cause embryonic lethality are in there. In other words, the most lethal mutations will not be in that database. Furthermore, all those mutations are changes to an established, viable genome. If you were to generate a de novo genome randomly, what would be the chances you get even one working promoter, for instance? For a viable human, you don't need one correct sequence, you need thousands. Fgf10 (talk) 18:43, 5 January 2016 (UTC)
Although as the previous responses have pointed out it is impossible to calculate precisely, I think we can safely say that there is a practically unlimited number of possibilities for a viable human genome. Even if just 0.0001% of the 3 billion+ base pairs can be changed arbitrarily without affecting viability, you get over 10^1800 possible combinations, vastly more than can be realized in the lifetime of a billion universes. - Lindert (talk) 11:33, 6 January 2016 (UTC)
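Lindert's 10^1800 figure is easy to reproduce; working in log10 avoids constructing an 1800-digit integer (the 0.0001% fraction is the same assumption as in the post above):

```python
import math

GENOME_BP = 3_000_000_000      # ~3 billion base pairs
FREE_FRACTION = 0.0001 / 100   # the 0.0001% assumed freely variable above

free_sites = GENOME_BP * FREE_FRACTION           # 3,000 positions
log10_combinations = free_sites * math.log10(4)  # 4 possible bases per free position

print(f"roughly 10^{log10_combinations:.0f} possible combinations")
```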
I agree - the answer is definitely "enough". Enough to make it astronomically unlikely that any two humans will ever (by chance alone) have identical DNA for as long as humanity can survive. But putting a number to it is an exceedingly complicated question whose answer lies far beyond what we currently know. SteveBaker (talk) 17:50, 6 January 2016 (UTC)
Lol, independent variables. You know there's an exception. :) Wnt (talk) 20:38, 6 January 2016 (UTC)
Well, my wife and I have identical twin girls...so, yeah, trust me, I know. But I did say "by chance alone". Identical twins are not identical because the dice were rolled and the astronomically small chance came up. They are identical because they started out as a single organism and split in two sometime shortly after formation. They are, in essence, a single person who was split in two very early on. SteveBaker (talk) 20:13, 7 January 2016 (UTC)

radio-activity

Is it possible for big storages of fissionable material with high ratios of released-binding-energy-expected per unit volume, to release subatomic reaction energy spontaneously causing unintended destruction ? :-) Thank you — Preceding unsigned comment added by Vijay Chary (talkcontribs) 18:33, 5 January 2016‎ (UTC)

I think the answer to your question is "no". Now, many fissile elements are radioactive, so they give off energy through radioactive decay, which can include spontaneous fission. This is why particularly radioactive materials need to be handled carefully. But, you're not going to get a nuclear chain reaction unless you do stupid things with large quantities of material (in which case you may get a criticality accident). Generating a self-sustained nuclear reaction, which you need for nuclear power or nuclear weapons, is actually difficult; a lot of people have this misconception that a nuclear reactor can explode like a nuclear bomb, but this is wrong. The problems with nuclear waste storage revolve around safely storing it while it decays. It would cause "unintended destruction" if not stored properly, but through radiation poisoning, not a big kaboom. --71.119.131.184 (talk) 20:14, 5 January 2016 (UTC)
A nuclear reactor can explode like a regular bomb, though. Sagittarian Milky Way (talk) 22:37, 5 January 2016 (UTC)
I don't know if the bit "a lot of people have this misconception that a nuclear reactor can explode like a nuclear bomb" is accurate. Of course, this depends on your definition of "a lot", but I would be shocked if this is true when "a lot" means "most people." On the other hand, people believe all sorts of crazy stuff, and one conspiracy theorist or another would be delighted to scare people. Denidi (talk) 01:17, 6 January 2016 (UTC)
The movie Aliens might have contributed to this misconception, since the nuclear reactor in that movie did indeed undergo a nuclear explosion once it lost coolant. I wonder if there is any way one could, if foolish enough to do so, design a nuclear reactor where this was actually possible. StuRat (talk) 05:42, 6 January 2016 (UTC)
Don't they have military-only designs that use weapons-grade fuel? I'm not up on nuclear physics but 1. It'd be really stupid to design it to go supercritical if the rods were pulled out fully. 2. It's not easy to make a nuclear bomb not fizzle if you've never made one before so unless the containment dome does something wonderful in extending the percent that fissions before it blows itself apart it might not be able to reach a traditional atomic bomb size (14-20 kilotons or so). Maybe you could keep tons of bomb-grade fuel subcritical with enough control rods so if you did a really stupid design like control rods that fall completely out of the fuel at the speed of gravity if the positioning electromagnets fail then maybe you could get another Hiroshima. Sagittarian Milky Way (talk) 16:22, 6 January 2016 (UTC)
I seem to recall an almost-as-stupid design where, while the control rods themselves would retard the reaction, the steel tips on the end had the reverse effect, by reflecting particles back into the core when first inserted. Thus, if you inserted lots of rods all at once, you could create a runaway reaction. I think it just caused a conventional explosion, though.
I suppose you could also get a rogue nation, like Iran, building a nuclear reactor designed to generate a nuclear explosion, if attacked, to discourage anyone from attacking (whether due to fallout concerns or the political consequences of causing it). StuRat (talk) 21:19, 6 January 2016 (UTC)
That was the RBMK reactor used at Chernobyl among other places, except it was graphite, not steel. And yes, it was stupid; the RBMK reactor is a fairly unsafe design built with the first priority being cheap and quick production of plutonium for nuclear weapons, with power generation as a side effect. After the Chernobyl accident they retrofitted the reactors to remove some of the more dangerous parts of the design, including that. --71.119.131.184 (talk) 21:45, 6 January 2016 (UTC)
"A lot" doesn't inherently mean "most", i.e. "a majority". It can just as easily mean a significant minority. ←Baseball Bugs carrots06:49, 6 January 2016 (UTC)
Look at Fissile material for the difference between fissile and fissionable. Also, check out Natural nuclear fission reactor about a prehistoric natural reactor in Gabon about 2 billion years ago, which was similar to your question. Tobyc75 (talk) 20:24, 6 January 2016 (UTC)
This is an exercise in semantics, really. A fizzle yield is not exactly a bomb, but it can separate enough hydrogen to blow up a containment building, and once the fallout is on the way you don't want to be there. Any storage pond of high level radioactive waste is just waiting for the power to go out, the diesel generators to fail, and they start catching on fire and producing a terrible mess. So call it a really poorly made, really dirty nuclear bomb, and you're at least technically right. Wnt (talk) 20:33, 6 January 2016 (UTC)

anthropology

If some members of anthropoid 'herds' evolved into proto-humanoid 'tribes', why does the community of anthropologists claim that this actually occurred in south afrika ?  :-) — Preceding unsigned comment added by Vijay Chary (talkcontribs) 18:33, 5 January 2016‎ (UTC)

I took the liberty of splitting your questions into different sections. Anyway, the generally-accepted view that modern humans arose in Africa is known as the Out-of-Africa theory; that article may be informative. There's a lot of evidence for this. I'm not quite sure what you're asking exactly. Our recent (on evolutionary time scales) ancestors were not herd animals; "herd" has a specific meaning in biology and zoology. Primates, including humans, are generally tribal, living in tribes of between a handful to a few hundred individuals. Humans are believed to have lived similarly before the advent of agriculture. (Dunbar's number might be of some interest.) Also, modern humans are believed to have arisen more in East Africa than farther south, though you might be using "South Africa" to refer to all of Sub-Saharan Africa. --71.119.131.184 (talk) 20:14, 5 January 2016 (UTC)
Addressing a couple of possible misunderstandings suggested by the OP's wording . . . .
A member (i.e. an individual) never evolves into anything. Evolution happens to populations of many individuals over numbers of generations, whereby carriers of some alleles ('versions') of genes more often die before reproducing, while carriers of other alleles of those genes less often die before reproducing, resulting in a descendant population with different proportions of the differing alleles (in some cases, the proportion might be zero). Accumulations of such allele differences may eventually give rise to noticeable physiological differences.
Very occasionally, more major changes (such as a doubling of a whole gene, a whole chromosome, or even a whole genome) may occur which give rise to noticeable differences immediately, and also result in "surplus" genes that can mutate to take up new functions, which might be beneficial, because the original copies of the gene(s) is/are still carrying out its/their original functions.
The reason most (though not all) anthropologists think human (or better, hominin) evolution probably took place in East and/or South Africa is because that's where we've found most of the oldest fossils of likely ancestors or near-ancestors of our particular human/hominin species, Homo sapiens sapiens. This however depends in part on where conditions favourable to preserving and finding fossils exist (fossilisation is a very rare event), and where we've looked so far: more discoveries elsewhere could, and might, modify those presumptions, because that's how Science 'works'. {The Poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 21:29, 5 January 2016 (UTC)
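The allele-frequency point above can be illustrated with a toy simulation (all numbers invented: a 5% reproductive edge, a population of 1000, 100 generations). Selection shifts the population's allele proportions even though no individual ever changes:

```python
import random

# Toy illustration (not from the thread): selection acts on allele frequencies
# in a population, not on individuals. Allele A carriers have a small edge.
random.seed(42)

POP_SIZE = 1000
FITNESS_EDGE = 1.05   # assumed: carriers of allele A leave 5% more offspring on average

freq_a = 0.10         # starting frequency of allele A
for generation in range(100):
    # Expected frequency after selection, then binomial sampling (genetic drift).
    expected = freq_a * FITNESS_EDGE / (freq_a * FITNESS_EDGE + (1 - freq_a))
    carriers = sum(random.random() < expected for _ in range(POP_SIZE))
    freq_a = carriers / POP_SIZE

print(f"frequency of allele A after 100 generations: {freq_a:.2f}")
```

With these assumed parameters the allele's frequency climbs from 10% to a large majority, while the small population size keeps a visible amount of random drift in the trajectory.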
I disagree with your final statement, that might have been true 100 years ago, but since genetic studies over the last 30 years I don't believe fossils are the reason, or even the main reason why most anthropologists accept the Recent_African_origin_of_modern_humans model. Our article has some good info about the evidence. Vespine (talk) 00:19, 6 January 2016 (UTC)
I concur, but the OP's "anthropoid to humanoid" wording was ambiguous as to whether he was asking about (or even understood distinctions between) Homo sapiens, earlier Homo spp, and other/earlier hominins, and therefore what time frame he was talking about. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 12:47, 6 January 2016 (UTC)

Elements past period 7

With the seventh period of the Periodic Table filled in, is it going to take a massive additional effort to get period 8 elements? Bubba73 20:28, 5 January 2016 (UTC)

I don't know who the best person to ask is. For anyone answering this question, please try to focus primarily on ununennium (element 119) and unbinilium (element 120) and not too much about elements 121 and up. Georgia guy (talk) 20:39, 5 January 2016 (UTC)
Attempts at those two are noted in our ununennium and unbinilium articles. Extended periodic table is our larger-perspective article on "beyond period 7" possible stability details, etc. DMacks (talk) 20:45, 5 January 2016 (UTC)
To get 119 and 120? No. To get anything above that? Yes. Double sharp (talk) 15:30, 6 January 2016 (UTC)
  • The periods of the periodic table are related to the properties of the electron orbits (the periodic table is primarily a tool for chemists, and chemists don't usually care much about what's going on at the nuclear level). The reason heavy elements are unstable meanwhile is to do with the properties of the nucleus (specifically, the repulsion between protons and proton and neutron energy levels within the nucleus). There's no direct link between the two, so synthesizing a light period 8 element shouldn't be much different to synthesizing a heavy period 7 one. It's predicted that period 8 should include some relatively stable elements (where "relatively stable" still means half-lives measured in fractions of a second) in the so-called island of stability. Smurrayinchester 09:45, 8 January 2016 (UTC)
    • The problem with the island of stability is that no one is sure exactly where it is, save that it is around the heavy period-7 to light period-8 region. But if it is at the end of period 7 (around element 112 or 114), then period 8 would be seriously challenging. The current predictions give half-lives rapidly plunging into the microseconds after element 120 for the isotopes we can reach (remember that the superheavy isotopes we can currently synthesize are all neutron-deficient), and nuclides that don't survive a microsecond are not going to make it to the detector with current technology. Already elements 119 and 120 are predicted to be pushing the limits. The borderline of what we currently should be able to do of course has nothing to do with the period divide, but it is coincidentally awfully close.
    • J. V. Kratz incidentally has predicted that the next proton magic number (full proton shell) would be 120, which would very nicely tally with Zagrebaev's prediction that beyond 120 the atoms wouldn't make it to the detector in one piece. Double sharp (talk) 10:25, 8 January 2016 (UTC)
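To put the "survives to the detector" criterion above in perspective, here is a rough order-of-magnitude check. The separator length and recoil speed are illustrative assumptions (typical magnitudes for recoil separators, not sourced figures):

```python
# Rough check of the detection criterion discussed above: a newly made
# superheavy nucleus must survive its flight through the separator.
# Both numbers below are assumed, order-of-magnitude values.
SEPARATOR_LENGTH_M = 4.0            # assumed few-metre separator
RECOIL_SPEED_M_S = 0.03 * 3.0e8     # assumed recoil speed, ~3% of c

flight_time_s = SEPARATOR_LENGTH_M / RECOIL_SPEED_M_S
print(f"flight time ~ {flight_time_s * 1e6:.2f} microseconds")
# A nuclide with a half-life well below this never reaches the detector intact.
```

The sub-microsecond flight time is why half-lives "plunging into the microseconds" put isotopes beyond element 120 out of reach of current setups.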

January 6

Homocysteine elevated

Recommendations for reducing levels of homocysteine

Presuming that you mean in humans, is what you're looking for covered in the Hyperhomocysteinemia article? Cannolis (talk) 02:17, 6 January 2016 (UTC)
Isn't that medical advice? It is impossible to know if a person is taking enough nutrients without a homocysteine blood test. So, go ask a doctor if you need one. --Denidi (talk) 16:41, 6 January 2016 (UTC)
It's not medical advice when you're giving a general indication, rather than prescribing to a specific patient. But in any case, the article above already says so much we'll be hard pressed to think of more (and if we do, we really ought to add it there and just ping here after we do) Wnt (talk) 20:35, 6 January 2016 (UTC)

Destroyer

Hello!

  1. What is the difference between an 'atomic' bomb and a 'hydrogen' bomb? What does each do? - simplistically.
  2. What is more powerful from the two enquoted above, or is it a 'molecular' destroyer?

Mr. Zoot Cig Bunner (talk) 20:15, 6 January 2016 (UTC)

See atomic bomb, then fission bomb and thermonuclear bomb (a hydrogen fusion bomb being the most common type). The latter is far more powerful, but uses the former to trigger it. A fission bomb is where atoms split apart, releasing energy, and a fusion bomb is where atoms fuse together, releasing energy. It might seem confusing that atoms can release energy either when splitting or joining, but, in general, larger radioactive atoms (like some isotopes of uranium and plutonium) release energy by splitting, while smaller atoms (like isotopes of hydrogen) release energy when fusing.
Not quite sure what you mean by a "molecular destroyer", but any molecule near a nuclear explosion will be destroyed, although no significant additional energy is typically released in that process (with a possible exception for fires that spread). You also might be interested in the theoretical antimatter bomb, which would be on the order of ten thousand times more powerful yet. StuRat (talk) 20:19, 6 January 2016 (UTC)
Oh, great Stu, you and your gee dee antimatter bomb advocacy. Next you'll have the Romulans destroying Vulcan. μηδείς (talk) 21:06, 6 January 2016 (UTC)
Lol. -- Mr. Zoot Cig Bunner (talk) 18:32, 7 January 2016 (UTC)
In the Star Trek universe, I would have expected wider use of antimatter bombs, since they used matter/antimatter reactors to power their star ships. Specifically, a warp-capable torpedo could drop out of warp on the target and detonate immediately, with enough power to destroy a planet. But star ships within sight of each other, firing volleys back and forth, seems more exciting, so that's what we got. StuRat (talk) 21:14, 6 January 2016 (UTC)
Standard 24th century photon torpedoes carry 1.5 kg each of matter and antimatter (hydrogen specifically), giving on the order of 64 megaton yield. Do some maths and see what's needed to actually destroy a planet..... 82.8.32.177 (talk) 22:42, 6 January 2016 (UTC)
Well, it shouldn't be a problem to carry 1000x as much, maybe a million times as much. By comparison, a B-52 carried 31,500 kg in bombs, some 20 thousand times as much. StuRat (talk) 22:54, 6 January 2016 (UTC)
A B-52 load of mixed matter/antimatter would have a yield of 2.84×10^21 J. The Chicxulub impact is estimated at 1×10^23 J. Still would barely make a dent. 82.8.32.177 (talk) 23:02, 6 January 2016 (UTC)
Then try a million times as much. What was the weight of the Enterprise supposed to be ? Try that much. StuRat (talk) 23:11, 6 January 2016 (UTC)
What do you mean by "destroy" a planet? There used to be (perhaps still is?) a Usenet group called alt.destroy.the.earth, whose FAQ explained that the group was for people who didn't "want the Earth to be there anymore". Just destroying civilization, or human life, or all life, well, that wasn't taking the matter seriously.
The gravitational binding energy of the Earth, according to that article, is around 2×10^32 J, which works out to something like 10^15 kg of antimatter, a trillion metric tons. I don't know what the towing capacity of Enterprise was supposed to be, but I doubt it's that big. --Trovatore (talk) 23:05, 9 January 2016 (UTC)
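The numbers in this thread are plain E = mc² arithmetic, and can be checked with a minimal sketch. The megaton conversion (4.184×10^15 J) and the binding-energy value (~2.2×10^32 J) are standard reference figures, not taken from the posts above:

```python
# Back-of-envelope check of the E = m*c^2 figures quoted in this thread.
C = 2.998e8            # speed of light, m/s
MEGATON = 4.184e15     # joules per megaton of TNT

# Photon torpedo: 1.5 kg antimatter annihilating with 1.5 kg matter.
e_torpedo = 3.0 * C**2
print(e_torpedo / MEGATON)        # ~64 megatons

# Antimatter needed to supply Earth's gravitational binding energy:
# each kg of antimatter annihilates 1 kg of matter, releasing 2 kg * c^2.
binding_energy = 2.24e32          # J, a standard reference estimate
m_antimatter = binding_energy / (2 * C**2)
print(m_antimatter)               # ~1e15 kg, about a trillion metric tons
```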
Destroying all life on the planet will be quite sufficient. StuRat (talk) 00:02, 10 January 2016 (UTC)
I've seen this in two movies: 1) a movie called Machete (film), where it was like a normal gun/pistol, and 2) I can't recall the movie name, but a group of little kids commands a group of 'space ship' vessels, one containing a gun worth 70-72 billion dollars (a molecule destroyer), which they use to destroy a planet filled with 'a kind' of species. Yes, they destroyed the whole planet with one laser-lookalike shot.
Thanks btw Stu.
Mr. Zoot Cig Bunner (talk) 18:56, 7 January 2016 (UTC)
Briefly - there are two ways to get energy out of atoms - you can persuade a big atom to break apart ("fission")- or you can persuade some small ones to join together - tossing out a fraction of their mass as energy as they do so (fusion).
A bomb made with heavy elements like uranium or plutonium (a "fission bomb") is the simplest to make because those heavy atoms are so big and bloated that they are already trying to fall apart (hence the fact that they are radioactive). Just put together a big enough pile of the stuff (and do it quickly enough) and you have a nuclear weapon. But there is a snag - once you put more than about 10kg of plutonium in one place, it's going to explode all by itself - this is called "the critical mass". To make a bomb, you take a couple of chunks, each weighing less than 10kg, and throw them together (possibly using conventional explosives to do it fast enough) to make something weighing more than 10kg. But you can't make a bomb twice that big using that trick because if you take two 10kg chunks - each one is going to explode before you want it to. So if you need something much bigger than a Hiroshima-sized bomb - you need more, smaller pieces - and getting them all to slam precisely together at exactly the same moment becomes increasingly difficult as the size of the intended explosion gets bigger. You have to slam the pieces together quickly or they'll get crazily hot and either melt or set off a half-hearted "fizzle" before you get it all together in one place to make a decent sized bang. That's why they use conventional explosives to push the heavy plutonium/uranium together. It's tricky to get this right so the bomb doesn't "fizzle" - which has been a problem for several of the recent efforts in N.Korea and elsewhere. The bigger the bomb, the harder it gets.
With a hydrogen bomb, the idea is to force the teeny-tiny hydrogen atoms together so hard that they fuse together to form helium and produce a shit-load of energy in the process. Hydrogen by itself is extremely easy to handle - you can put an awful lot of it in a small place - and it won't explode or anything (well, so long as there isn't oxygen around). The trick is to force it all together tightly enough (and quickly enough) to unlock all of that energy. To do that, the usual trick is to use a regular fission bomb as a trigger. So now, a conventional explosion forces together a couple of chunks of plutonium or uranium - that explodes as a fission bomb - which compresses the hydrogen sufficiently to cause it to fuse into helium and make a much MUCH bigger bang. This is tricky because this all has to happen before either the conventional explosion or the resulting fission bomb explosion destroys the whole machine.
So hydrogen bombs are clearly harder than plutonium/uranium bombs to make - but the size of explosion you get from them is also vastly larger.
SteveBaker (talk) 20:06, 7 January 2016 (UTC)
That's a good starting point as an explanation, but it's incomplete enough on its own that I think it's actually kind of misleading. In most thermonuclear weapons, while certainly hydrogen fusion provides a significant part of the yield, I don't think it's the majority of it. The bigger contribution of the fusion component is that it generates massive neutron flux, which in turn causes fission in the uranium shell surrounding the bomb. Still, your explanation is right in principle; you can't make that much uranium fission all at once by ordinary means, but you can if you throw fusion into the equation. --Trovatore (talk) 03:51, 8 January 2016 (UTC)
Yeah - I was trying to keep things simple (our OP asked for a 'simplistic' answer). There is an additional category of weapon which works more exactly as I described - where the fusion reaction is deliberately NOT used to trigger a secondary fission event - and that is a neutron bomb. These weapons are hydrogen bombs with thin casings and they are engineered to produce a huge neutron flux and not produce such a large explosion. The idea is to kill people over a larger radius - but without doing so much damage to buildings and other infrastructure. The US invented them as a means to prevent large soviet conventional armies from taking over an area by killing their personnel without destroying other infrastructure. Adding a heavy uranium casing produces a bigger explosion from that secondary fission event. The lighter casing allows these neutron-bomb weapons to be fired from conventional artillery.
SteveBaker (talk) 14:29, 8 January 2016 (UTC)

Good to know that there is more than what I stated... Thanks guys. -- Mr. Zoot Cig Bunner (talk) 18:47, 9 January 2016 (UTC)

How tall could an artificial mountain be?

Using just known materials, how high could we pile them (formed like a mountain)? Would that be a less expensive, less risky possibility to go to space? That is, a pile built century after century reaching more than 50 miles high.--Scicurious (talk) 21:23, 6 January 2016 (UTC)

No. The current highest mountains give you an idea for what the limit is. All sorts of things happen to cause that limit. The ground underneath compresses, there is erosion from the top down, there are landslides, etc. And realize that it doesn't just take twice the effort to build a mountain twice as high. It would also be twice the diameter, which means 8 times the volume and mass (not counting compression), and the materials have to be lifted higher, so require more time and energy per block. So, you could well be looking at 16 times the effort to make a mountain twice as high. And the current tallest mountains at 5.5 miles high are nowhere near into space, so you would need to double the height (and increase the effort by 16), many times to get there.
A better approach might be to create a launch tube in an existing mountain, so the ship will leave the top at high speed, not having used any of the onboard fuel yet. StuRat (talk) 21:31, 6 January 2016 (UTC)
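StuRat's "16 times the effort" estimate above follows from simple scaling: for a geometrically similar cone, mass grows as the cube of the height, and the work to lift the material grows as the fourth power. A minimal sketch of that argument:

```python
# For a geometrically similar cone, volume (and mass) scale as h**3,
# and the centre-of-mass height scales as h, so the gravitational work
# to lift all the material into place scales as h**3 * h = h**4.
def lift_work_ratio(height_ratio):
    """Relative lifting work when the mountain's height is scaled."""
    return height_ratio ** 4

print(lift_work_ratio(2))   # 16: doubling the height costs ~16x the work
```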
Consider Olympus Mons, a mountain the size of France and about three times the height of Mt Everest. Consider how much effort it would take to "build" a mountain of that size, and then consider that it's still just a "wart" on the surface of Mars; you would need a mountain four times that height to get to space. Vespine (talk) 21:38, 6 January 2016 (UTC)
And it's only as large as it is due to Mars' reduced surface gravity (0.376 g, or about 3/8ths as much as Earth). StuRat (talk) 21:41, 6 January 2016 (UTC)
The cone building limit is approximately:

h_max = σ_c / (ρ g)

where σ_c is the limit of compressional stress, ρ is the density of the material, and g is the surface gravity. For granite (limit ~200 MPa, density ~3 g/cm^3), this works out to about 7 km, which is also about the prominence of the highest mountains. Most materials with better stress limits are also denser, so it isn't easy to find materials that can be piled higher. I'm not really sure if any easily available material would allow one to reach 100 km or other extreme height. Dragons flight (talk) 22:05, 6 January 2016 (UTC)
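The σ_c/(ρg) limit above is easy to evaluate; here is a sketch using the granite figures quoted in that post. The Mars gravity value (3.71 m/s²) is an added assumption, included because Olympus Mons came up earlier in the thread:

```python
# Limiting height of a self-supporting cone: h_max = sigma_c / (rho * g).
# Granite values below are the rough figures from the discussion:
# compressive limit ~200 MPa, density ~3 g/cm^3 (= 3000 kg/m^3).
def max_cone_height(sigma_c_pa, rho_kg_m3, g=9.81):
    """Height (m) at which the base reaches the compressive stress limit."""
    return sigma_c_pa / (rho_kg_m3 * g)

print(max_cone_height(200e6, 3000))           # ~6.8 km on Earth
print(max_cone_height(200e6, 3000, g=3.71))   # ~18 km on Mars
```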
People have seriously thought along the lines of "building a stairway to heaven", though you wouldn't do it by building a mountain. A space elevator is a commonly-discussed idea; see non-rocket spacelaunch for others. --71.119.131.184 (talk) 22:40, 6 January 2016 (UTC)
Something that will work against any attempt to build such a mountain is isostasy - the lithosphere will be warped downwards under the load - that's why mountains have roots. On a very short timescale this may not matter too much, but over millennia it will become important. Mikenorton (talk) 22:53, 6 January 2016 (UTC)
What does that mean, “mountains have roots”? —Tamfang (talk) 04:39, 7 January 2016 (UTC)
Mike Norton is referring to the geology underneath large mountains. In order to "float" heavy rocks on top of liquid rocks, the mountains need to buoyantly displace some of that material, or else they would sink. Mountains abide by the same physical laws as everything else; we often just don't notice, because these processes occur very slowly (...rock is a lot denser and more viscous than, say, water).
Things get very complicated, because different parts of the crust and mantle have different geochemistry (and thus, different densities); and in some places, mountain uplift is a dynamic and unstable process (like in the Sierra Nevada Mountains, which will probably start sinking once they lose some of that upward momentum from their convective bobbing). They've been having their first upward bounce for a few hundred million years, so it might be a while before they sink.
In other words, when you see a mountain, you're only seeing the "tip of the 'berg."
Nimur (talk) 16:36, 7 January 2016 (UTC)
Thanks Nimur - see also Continental_crust#Forces_at_work. One of the counterintuitive results is that large mountains have large negative Bouguer gravity anomalies (gravity field that has been corrected for topography), first noticed near the Peruvian Andes by Pierre Bouguer in the 18th century. Mikenorton (talk) 17:35, 7 January 2016 (UTC)
Nitpick: the mantle isn't liquid (as the article says); this is a common misconception. It's solid but plastic (in the original sense of the word, not the common modern meaning of "a man-made hydrocarbon material"). This distinction is important for understanding its properties. You are of course correct about the overall issues of buoyancy and displacement. --71.119.131.184 (talk) 07:16, 9 January 2016 (UTC)
Also, another thing worth discussing: even if we could magically create a mountain that reaches into space, if you want to stick things into orbit from the mountain's peak you still have to impart a bunch of energy to them. Objects wouldn't "float" into orbit if you released them from the peak; Earth's gravity is still pulling down on them. "There's no gravity in space" is a very common misconception, but it's obviously wrong (what keeps the Moon in orbit around the Earth?). To stay in orbit around Earth, you need to be moving really fast. You're weightless in orbit because you're in freefall; your sideways motion means you keep missing the body you're orbiting. (A suggested resource for understanding this more: Newton's cannonball.) For a rocket used for an orbital launch, most of its fuel is used to impart sideways motion, not to make it go up. Proposals for non-rocket spacelaunch often revolve around a way to impart energy to the payload from an external source, instead of using fuel carried as part of the payload. This avoids "the tyranny of the rocket equation". --71.119.131.184 (talk) 00:08, 7 January 2016 (UTC)
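The "you need to be moving really fast" point can be made concrete with the circular-orbit speed v = √(GM/r). A minimal sketch using standard constants (not figures from the thread):

```python
import math

# Circular orbital speed v = sqrt(G*M / r): even released from a peak in
# "space", an object still needs this much sideways speed to stay in orbit.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def circular_orbit_speed(altitude_m):
    """Sideways speed (m/s) needed for a circular orbit at a given altitude."""
    return math.sqrt(G * M_EARTH / (R_EARTH + altitude_m))

print(circular_orbit_speed(200e3))   # ~7.8 km/s at 200 km altitude
```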
Unless, of course, you could build a mountain that reached up to geostationary orbit...but that's so utterly out of the question as to not be worth considering. However, that is precisely the plan for the space elevator. What's interesting about that altitude is that after that point, the higher you build, the LIGHTER the structure becomes - which is why the space elevator doesn't need to be a tower - it's a cable that's kept in tension by nothing more than the fact that it's very long. SteveBaker (talk) 19:39, 7 January 2016 (UTC)
What about a "mountain" consisting of a pyramid or cone shaped framework, made of light but strong materials, rather than it being solid? Baseball Bugs (talk) 12:37, 7 January 2016 (UTC)
Using that idea... reduce weight by reducing materials. Make the bottom of the pyramid smaller and smaller and the sides steeper and steeper. Eventually, you would have a long cord - a space elevator. 209.149.114.138 (talk) 16:11, 7 January 2016 (UTC)
Bingo! Baseball Bugs (talk) 16:34, 7 January 2016 (UTC)
A lattice might remove most of the weight, but it also removes most of the strength. As it turns out these factors tend to more or less offset each other so that the limiting height of an open lattice construction tends to not be very different than building a solid construction of the same material. Dragons flight (talk) 16:52, 7 January 2016 (UTC)

Rules restricting use of centi, deci, deca, hecto SI prefixes.

I heard that SI has special rules restricting the use of the centi, deci, deca, and hecto SI prefixes, namely that unlike the other prefixes, these prefixes are recommended only for certain particular units of measure, and possibly for certain uses of those units. Where can I find a description of these rules? I'd prefer a description easily accessible on the internet. – b_jonas 21:47, 6 January 2016 (UTC)

Not sure where to list them, but they are mostly going to be "only use them where it is currently customary", like cm. StuRat (talk)
Maybe, but if I am wrong and such rules aren't available, I'd like at least a description of the customary use of those prefixes. The difficulty here is that the use of prefixes in everyday topics differs by location, and the use in professional contexts may differ by area of expertise. In particular, decagrams are commonly used in informal speech in Hungary to measure food items, such as meat products or cheese when bought in amounts smaller than 0.5 kg, but in some other countries it isn't used for such a purpose. Similarly, deciliters and centiliters are used in everyday speech about liquids. – b_jonas 10:25, 7 January 2016 (UTC)
I can't find any rule on the BIPM site suggesting their usage should be restricted. I did find one page (and other sites replicating the same thing) which says

The prefixes hecto- to centi- are not 'preferred prefixes' but referred to as 'other prefixes' by SI .... Le Système International d'Unités (SI) names the prefixes giga and nano, milliard and milliardth respectively. The wording shown here was approved by the General Conference on Weights and Measures and has been adopted in practice.

But I can't find this or any reference to preferred prefix or other prefix anywhere on the BIPM site. So either it isn't on the site in English in searchable format (bearing in mind the official language is I believe French, and it's possible some of the older stuff is still in images that may not be OCRed or may not be properly OCRed), my search is screwing up and there is something somewhere, or the wording above is confusing and the "preferred prefix" part isn't coming from the BIPM or the CGPM. (The wording is also confusing because it doesn't discuss deca etc.) It's possible the wording used to be there, but was removed at some stage and I'm having trouble finding the resolutions where this happened.

You're correct their usage is uncommon in many areas of work and units, with some variance from country to country. (The bit you mentioned is partly mentioned in our articles like Deca- and litre. Actually the latter article mentions the bit about their usage being discouraged, but it's unsourced.) National standards and other bodies and style guidelines may also have their own rules rejecting or discouraging the use of these prefixes. E.g. Metric prefix (the part about building codes).

I also came across this interesting perspective with claims of centimetre causing problems in adoption.

Nil Einne (talk) 13:28, 7 January 2016 (UTC)

Edit: Probably should have included which appears to be the resolution where the prefixes were adopted.

You'll also see linked on that page this PDF, which is the report/proceedings, in French of course, of the 11th conference where that resolution was adopted (page 87), if anyone wants to investigate further for any discussion of preferred prefixes (or whatever). That page also mentions the 1958 CIPM; I think this is the whole report in French, if anyone is interested in finding if there is any mention of preferred prefixes there instead.

Nil Einne (talk) 14:30, 7 January 2016 (UTC)

I'd like to mention that it's hard to get usage info about how spread these prefixes are because they're used in speech more frequently than in writing. This is not surprising: in speech, 25 decagrams or 35 decagrams is easier to say and understand than 250 grams or 350 grams, but in writing, 250 g or 350 g or 0.250 kg or 0.350 kg are easier to read than 25 dag or 35 dag or 25 dkg or 35 dkg. This is why people ask for 25 decagrams of cheese in the shop, but then the electronic weight scale prints a label with "0.250 kg" or something similar on that cheese. This applies to me as well: I often use decagrams and centimeters in speech, but rarely in writing. – b_jonas 16:08, 7 January 2016 (UTC)
Don't confuse the metric system with the SI system. The metric system is the full set of prefixes. The SI system is a subset of the metric system built around base units for seven defined measurements: length, mass, time, temperature, amount, current, and luminosity. Since all other measurements can always be a derivation of those seven basic measurements, you define units for those seven base quantities and let the rest fall out; i.e. volume is length cubed, electric charge is current multiplied by time, energy is mass multiplied by distance squared and divided by time squared, and so on. The SI system only uses those 7 units as its standards, and all except one are unprefixed; the only SI base unit with a prefix is the kilogram. There's an alternate system called the CGS system, which only uses the centimeter as a prefixed unit; the rest are base units. So, I think you're confusing the terms here. There's two different systems, one of which is a subset of the other:
  • The metric system, which is the full set of all possible measurements you can make, along with the full set of power of ten prefixes (hecto, giga, pico, whatever)
  • The SI system only uses the seven base units (meter, kilogram, second, kelvin, mole, ampere, candela). All other units must be some combination of those base units, known as the SI derived unit.
Other units are valid metric units, but not SI units. The centimeter is not an SI unit; the meter is. In volume, the SI unit is the cubic meter (m^3); the liter is not an SI unit, because it is not a mathematical combination of the other units (it's a cubic decimeter, but the decimeter is not an SI unit). Similarly we have two common metric units of pressure: the pascal and the bar. The pascal is in the SI system, because it can be reduced to SI base units: 1 pascal = 1 kg/(m·s^2). The bar is not, because it cannot be simplified to SI base units with a coefficient of one (it's 10^5 kg/(m·s^2), i.e. 100 megagrams/(m·s^2)). I hope that clarifies things. --Jayron32 21:15, 7 January 2016 (UTC)
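The point that SI derived units reduce to combinations of base units can be illustrated with a toy dimensional-bookkeeping sketch (the dict-of-exponents representation is just an illustration, not any standard library):

```python
# Toy dimensional bookkeeping: represent a unit as a dict mapping each
# SI base unit to its exponent, and multiply units by adding exponents.
from collections import Counter

def mul(u, v):
    """Multiply two units by summing base-unit exponents."""
    out = Counter(u)
    out.update(v)
    return {base: exp for base, exp in out.items() if exp}

newton = {'kg': 1, 'm': 1, 's': -2}   # force = mass * acceleration
pascal = mul(newton, {'m': -2})       # pressure = force / area

# The pascal reduces to kg m^-1 s^-2; the bar has the same dimensions
# but a factor of 1e5, which is why it is metric but not coherent SI.
print(pascal)   # {'kg': 1, 'm': -1, 's': -2}
```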
Mostly this is a matter of semantics, but while it's true that the SI base units don't have prefixes except for the kilogram, I would suggest the prefixes are part of SI and not just part of the metric system. They are mentioned in our International System of Units article. And our Metric prefix article mentions SI prefixes.

And for good reason, the prefixes are mentioned under the SI brochure published by the BIPM as SI prefixes.

And the resolution which originally adopted/defined "the system founded on the six base units above is called the "Système International d'Unités"" ("le système fondé sur les six unités de base ci-dessus est désigné sous le nom de « Système international d'unités") or "international abbreviation of the name of the system is SI" ("l'abréviation internationale du nom de ce Système est : SI") said "names of multiples and submultiples of the units are formed by means of the following prefixes" ("les noms des multiples et sous-multiples des unités sont formés au moyen des préfixes suivants") and then went on to define/name the earliest prefixes . (See above for more links including the conference proceedings etc.)

Nil Einne (talk) 10:11, 8 January 2016 (UTC)

The SI standard is available freely in PDF here. (This is the US edition, but that only means it uses American spellings and adds a few notes regarding recommended practice in the US. The content is the same as other editions.) There is nothing in it to the effect that any prefixes are more preferred than any others. As indicated above, there may be national or other standards that make such recommendations, but if so, they are not part of the SI. --76.69.45.64 (talk) 02:32, 8 January 2016 (UTC)

What's the smallest or shortest building that'd be measurably weaker if it didn't follow Earth's curve?

Inspired by the mountain question, I wonder the above. The Boeing factory is about 100 meters tall and a half mile square, did they build the walls "not parallel" because of the curvature of the Earth? Did they have to mathematically alter the shape of the roof of the Aalsmeer Flower Building for its vast square kilometer size? How long would a catenary arch on a spherical planet have to be for it to be measurably weaker than the best shape for a globe? Does this shape have a name, too? Sagittarian Milky Way (talk) 22:54, 6 January 2016 (UTC)

Not sure if you really have to alter your plans in building construction to account for the curvature of the Earth. That is, there is a certain amount of tolerance in every joint, and that may well add up to more than enough to counter the effect. For example, the I-beams at the top would be slightly farther apart than at the bottom, but that spread would just be in the location of the rivets. Each vertical beam is likely made to be "normal to the Earth" using a plumb-bob, rather than "parallel to the rest". StuRat (talk) 23:17, 6 January 2016 (UTC)

And one more: How heavy could a structure on strong, geologically stable bedrock in a tectonically dead place be before you start affecting the crust? What would happen if you exceeded the pounds per square inch level of whatever the strong, stable bedrock is made out of without screwing with the crust? Sagittarian Milky Way (talk) 23:10, 6 January 2016 (UTC)

Read geotechnical engineering to get an idea of what is involved. Graeme Bartlett (talk) 07:23, 7 January 2016 (UTC)
I don't know about buildings, but suspension bridges are built with towers that are vertical but are further apart at the top than at the bottom due to the curvature of the earth. For example, the Humber bridge has 155m (510ft) towers that differ by 34mm (1.3 inches) and the Verrazano-Narrows Bridge has 693ft (211 m) towers that differ by 1 and 5/8ths inches (41 mm). Widneymanor (talk) 11:14, 7 January 2016 (UTC)
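The quoted tower-top spreads match a small-angle estimate: two plumb-vertical towers a distance d apart diverge by roughly d·h/R at height h, where R is the Earth's radius. A rough sketch (the main-span lengths of 1410 m and 1298 m are my assumed values for the two bridges, not figures from the post above):

```python
# Small-angle estimate of how far apart two plumb-vertical towers drift
# at height h when their bases are a distance span apart: span * h / R.
R_EARTH = 6.371e6   # mean radius of the Earth, m

def tower_spread(span_m, height_m):
    """Extra separation (m) at the tower tops due to Earth's curvature."""
    return span_m * height_m / R_EARTH

print(tower_spread(1410, 155))   # Humber Bridge: ~0.034 m (~34 mm)
print(tower_spread(1298, 211))   # Verrazano-Narrows: ~0.043 m
```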
Modern boreholes, for example the ones used in deep petroleum extraction, are an engineering marvel: drilling a hole through "solid rock" becomes a complex engineering challenge when the length-scales imply that the rock is not very solid! For example, drilling in salt is plagued by the fact that salt "flows" like a glassy liquidy mush (among its other fun behaviors). You won't see that effect in table salt - but if you were to try and cut a straight line through a few miles of mostly sodium-chloride, you'd see that your straight line starts to squish in all sorts of interesting ways as the overburden changes.
Along the same lines, petroleum and gas extraction are often accused of creating induced seismicity. This is not because the buildings on top are too heavy: it's because drilling wells and extracting fluids reduces pore pressure over a large volume. That effect can cause subsidence and even earthquakes.
Nimur (talk) 16:05, 7 January 2016 (UTC)
  • Does it really matter from an engineering point of view? What you care about in engineering is not if your walls are parallel, but if they distribute the load correctly, and deal with stresses correctly. If you are using standard simple engineering tools, things like plumb lines to detect true "down", then you're designing your stresses to be aligned with the earth's gravitational field, and not as perfect 90 degree angles anyway. I'm not an engineer, but it seems if you're making measurements by physically checking against the earth's gravitational field (i.e. geodesy) then it comes out right in the end, even if it doesn't come out parallel. --Jayron32 20:56, 7 January 2016 (UTC)
  • Then how wide or tall would a building have to be for it to be detectably non-parallel (either with engineering-grade instruments or the best tool known to man (a gravitational wave detector?))? How wide or tall would a hypothetical parallel building have to be to be measurably weaker than a real one of otherwise equal quality? Sagittarian Milky Way (talk) 21:27, 7 January 2016 (UTC)
The problem with this question is that you use the word "measurably" - so this is more a question of how sensitive our measurements can be than about the orientation of walls to the local vertical. How would we measure whether a building was "weaker" than it "should be" with any kind of precision? We don't generally measure the strength of buildings anyway - mostly we know the engineering parameters of the construction techniques used and we make sure there is ample safety margin. There are sometimes small-scale tests done in wind tunnels or on shaker tables (to simulate earthquakes) - but that's entirely impractical for any building large enough for the curvature of the earth to remotely matter.
So, no - the building won't be "measurably" weaker. We might ask whether it would theoretically be weaker - but that's a very different matter.
As others have said - builders routinely use plumb-bobs and spirit levels to get things straight and square - and those tools naturally ensure that walls are 'vertical' relative to the local gravitational direction (which might not be in the exact direction you'd predict from earth curvature anyway). Changes in underlying rock densities, nearby chasms or mountains - all of these things might result in the two side walls of a building not being perfectly parallel. The degree of difference between the width of a tall building at top and bottom is going to be a matter of inches at most. If you ever watch people on a building site measuring stuff - they are using long tape measures that are blowing in the wind, sagging in the middle, twisted and so forth - their errors are going to be much higher than errors due to the curvature of the earth. So these kinds of differences due to the earth's curvature are probably comparable in scale to the normal construction errors. Architects must allow for reasonable measurement tolerances when they design the building - so it's really not meaningful to ask whether the building is weaker...it might easily be stronger - depending on small details of the design.
Because the building construction is continually checked with plumb-bobs and spirit levels - there isn't going to be any induced weakness - the walls will be vertical and the floors still horizontal, not by design - but by virtue of the instruments used to construct it. All that would likely result would be that the top floor of a very tall building would be slightly larger in area than the bottom floor in a building that's designed to be perfectly cuboid...but even that difference will likely be hidden by the fact that the load-bearing structures on the lower floors have to be built stronger to carry the weight of the upper floors.
For a very wide building - which would technically need a slightly curved roof and floors - the same kind of thing applies. As the roof and floors are constructed, small errors in the sizes of support columns and such will be checked using a spirit level - and the floors and roofs will naturally curve because of that.
You might argue that a building that has prefabricated steel beams or something might need special attention - but steel expands and contracts with temperature - and the building has to be designed with compensations for that - and that is likely to be more than sufficient to take care of any earth-curvature differences.
SteveBaker (talk) 14:12, 8 January 2016 (UTC)

Aversion to IQ tests

I noticed that I have a certain aversion to IQ tests, such that I can't complete them and learn my IQ. Out of the 40-60 questions that an IQ test seems to contain, I can't go beyond approximately the 5th question (and certainly not beyond the 10th). After the first initial questions, as they get more complicated, I'm like "f*ck it" and quit, as I can't force myself to think further. Do some other people have the same issue, or was it mentioned somehow before? Thanks.--93.174.25.12 (talk) 23:44, 6 January 2016 (UTC)

Certainly people with test anxiety have difficulty taking tests, but because of reference desk policy (see the top of the page) we cannot diagnose any particular reason or reasons why you have had difficulty taking an IQ test. Hellmari (talk) 00:12, 7 January 2016 (UTC)
I agree with the above. What could possibly be relevant here is if there WERE any specific codified considerations for people who have trouble completing the "standard" test, but that almost certainly would not be part of any "online" IQ test, the vast majority of which are not "official" in any way. You'd probably need input from someone who actually works with "official" tests as to whether they have special rules for people with attention disorders and such things; maybe they allow more time, or allow short breaks between every 5 questions (that's just speculation). It would seem to me that while this might not be "common", surely it would have to be common enough. Vespine (talk) 00:22, 7 January 2016 (UTC)
You need to figure out a way to make answering questions "fun" for you. I'd venture a guess that a large percentage of ref desk regulars enjoy taking such tests. ←Baseball Bugs 01:37, 7 January 2016 (UTC)
Intelligence quotient is usually measured by a standardized test. Omitting answers - for any reason, including boredom - contributes to the correct scoring of the test. IQ absolutely corresponds to your ability to focus; and whether this makes you happy or not, if you can't focus, you probably have a lower IQ than somebody of otherwise equal capability who can focus well. The standardized test format is designed to include that dimension in its scoring.
If you are taking a test like the SAT, where omitted answers are unscored, you can obtain a base-level score even if you omit every question after the first few. But: you probably will not like the score you get: you can't expect to turn in a blank test card and receive a high score. (If you're interested in a discussion about treating SAT score as a correlate of IQ, or as a general intelligence test, it came up on this desk in October 2015).
Some researchers term this effect "mismeasurement" - for example, from our article on Attention-Deficit Hyperactivity Disorder, I found this 2008 article. But, for all these researchers calling this "mismeasurement," there are many more psychometrics research publications calling it "correct measurement." If you can't perform well on an IQ test, your IQ is lower than somebody's who can. This is the operational definition of IQ; it's why we can use IQ to measure the effects of, say, hypoxia on aviators, or sleep-deprivation on students, or the effects of trauma on soldiers, and so on. There are all sorts of confounding factors that affect focus. Here is a wonderful piece of quantitative psychometric research: Effects of Hypoxia... (1997), in which test subjects performed the MATB test battery with different oxygen levels. Amazingly, being a smoker has an incredible negative effect on your ability to focus - perhaps stronger than the effect of hypoxic hypoxia! These effects adversely impact test scores on standardized tests. So, why should any effects caused by your personality or behavior get a free exemption?
As I always like to remark when this topic comes up: not all psychologists believe that psychometrics is a relevant approach. This means that some psychologists discount the importance of standardized testing.
Also: almost any internet-based free "test" is not an IQ test. Internet-based tests are usually very poor quality - they use invalid testing methods, and often use poor quality questions and methodology. Do not treat "free web-based tests" as IQ tests. They are not the same at all.
Nimur (talk) 15:16, 7 January 2016 (UTC)
In this case the IQ test is doing what it should do. IQ tests were developed to help predict how well a person would do in a structured school environment. Those with a higher IQ would do better and go further in education. Those with a lower IQ would do worse. In your case, you admit that you fail at structured exams, which are a requirement for nearly all structured education systems. So, you would do poorly in a structured education system and, per the IQ test result, you should receive extra resources to help you with your education. In my opinion, the notion that high IQ equates to high intelligence has made this a difficult subject to discuss. High IQ simply means that you have a tendency to do better in a structured education system. If you begin with that understanding, then having difficulty taking the IQ exam should make perfect sense to you. 209.149.114.138 (talk) 16:03, 7 January 2016 (UTC)
I agree - the definition of "IQ" is "Your score on an IQ test" and if you don't score as well - then your score is your score regardless of the reason why.
The problem here is that our society has conflated "IQ" with actual intelligence - or "worth" to society - or some other damned thing - and that's just stupid. An IQ score measures your ability to do an IQ test - and nothing more. So, if you can't cope with the test and end up with a lower score than you think you're worth - then...well...you DO have a lower IQ. The problem is not that you wound up with a lower score than you hoped - the problem is that you (and others) tend to misinterpret the number as having some kind of importance to them.
That said, there are studies that show that people who do better at IQ tests (and hence get a higher IQ score) are statistically able to earn more money than someone who is less good at doing IQ tests - and to that degree, the IQ score does predict how well people do in the world. However, it's only a statistical relationship...it's not always the case that high IQ people earn a fortune or that people who earn a lot of money have a higher IQ. For a particular individual, you can't say "This person earns a lot of money because they have high IQ"...it could be for any of a million other reasons.
What we don't know is whether people with a lesser IQ score have a shorter attention span (and so get sick of doing the increasingly painful questions) - or whether it's because they simply don't have the intellectual capability to solve the harder puzzles - or whether their brains are better at intuitive reasoning rather than logical reasoning - or they were sick on the day they took the test. You can somewhat check that for yourself by taking a test and doing (say) only every third question. If you get further through the test before starting to fail - then your problem is really attention span - but if you still don't get very far because the puzzles start to get too hard - then perhaps you are less able at logic/reasoning challenges.
The real world is much more complicated than can be tested that easily - and certainly more multi-dimensional than could possibly be expressed in a single number. My IQ score is pretty good (probably because I actually enjoy doing the puzzles in IQ tests rather than being in some way "superior"). BUT just about anyone can beat me at playing chess - which is commonly considered to be a game that requires a lot of intelligence and which appears (at first sight) to depend on the same kinds of logical thinking as IQ tests. If we used "CQ" (chess-quotient - I just made that up) rather than "IQ" as our standard metric of how smart people are, I'd be in the 70's rather than the 170's...but I'd still be the exact same person - and who is to say that IQ is more or less valuable than CQ?
So in the end - don't sweat it. Your IQ score doesn't matter a damn - and anyone who says otherwise is wrong. SteveBaker (talk) 17:55, 7 January 2016 (UTC)
Chess is a unique game: a very small number of early game permutations exist, and by rote memorization of those openings, a player can develop a very strong advantage in the middle and late game. What this means (to me, anyway) is that the game of chess has a very strong bias towards individuals who have played a lot of chess, rather than to individuals who are very good at logical thinking. This is one reason why chess rating is not correlated strongly with other things, like IQ. Rote memorization of a specialized skill - one that can be improved by practice - is actually something that is typically excluded from definitions of the general intelligence factor.
A handful of chess enthusiasts have put forward one relation or another to compare chess skill and g: for example, Levitt's equation, by one chess author, converts from Elo rating to IQ; but it's not a scientific result - it's just one guy's opinion, and he's not a psychology expert; nor are his ideas peer-reviewed by other psychology experts. Besides, we've already discussed some of the flaws of the Elo rating scheme: it is not a standardized test. At best, it's analogous to "grading on a curve," and in that respect, chess rating does have some similarity to IQ normalization; but it's also a very unstable metric that depends heavily on who else is playing, and in what order they're playing their games.
Nimur (talk) 19:52, 7 January 2016 (UTC)
Well, yes - but don't you think that an IQ test is also "a unique game"...and it's been found that practicing helps you do better (but not by much). The question is: "What do you mean by 'intelligence'" - and then "To what purpose do you intend to put the number once you've measured it?" - neither of which are very well defined at this point. SteveBaker (talk) 22:02, 7 January 2016 (UTC)
Based on your description I suspect you're talking about free online "IQ tests". A lot of them give most people a high score in the hope that they'll pay for a paper certificate showing their score. Real IQ tests are proctored and cost money, and the questions may not be very similar to the questions in fake online tests. -- BenRG (talk) 00:18, 9 January 2016 (UTC)

January 7

Why does 80·10⁻⁶ = 8·10⁻⁵?

I'm trying to understand why 80·10⁻⁶ = 8·10⁻⁵. What is the explanation? 92.249.70.153 (talk) 01:13, 7 January 2016 (UTC)

Not sure if this will enlighten you, but (1) number a × number b = (ten times smaller number a) × (ten times larger number b), or (2) 80·10⁻⁶ = 8·10¹·10⁻⁶ = 8·10⁻⁵ Clarityfiend (talk) 01:43, 7 January 2016 (UTC)


Compare a simplified case: 80·10⁻² = 8·10⁻¹. Do they look equal to you?--Denidi (talk) 01:45, 7 January 2016 (UTC)
Which is another way of saying 80/100 = 8/10, or 80 x 0.01 = 8 x 0.1. Clarityfiend (talk) 01:54, 7 January 2016 (UTC)


(ec)You should review Exponentiation#Negative_exponents. Here's one way to look at it. Pardon the Fortran-style notation:
80 * 10**-6
= (80 * 1/10) * 1/10 * 1/10 * 1/10 * 1/10 * 1/10
= (8) * 1/10 * 1/10 * 1/10 * 1/10 * 1/10
= 8 * 10**-5
Baseball Bugs 01:55, 7 January 2016 (UTC)
N.B. The · is the dot product.
Sleigh (talk) 01:58, 7 January 2016 (UTC)
While the dot sign can be used for the dot product, it can also be used to denote regular multiplication, as explained in the first bullet point under Multiplication#Notation and terminology. Usage is determined by context, and in this case, where neither multiplicand nor multiplier are vectors, it is clear that regular multiplication and not the dot product is intended. -- ToE 07:27, 7 January 2016 (UTC)

Also see scientific notation. Bubba73 02:52, 7 January 2016 (UTC)

Perhaps the negative exponents are getting you confused. So, let's look at an example that uses only positive exponents: 7·10³ = 70·10². In this example, both expressions mean 7,000. The first expression means 7 times 1,000 and the second expression means 70 times 100, both of which mean 7,000 as a final product. Your example with negative exponents is in the same vein. Joseph A. Spadaro (talk) 08:14, 7 January 2016 (UTC)
Also, a suggestion. You might be better served at the Math Help Desk. Joseph A. Spadaro (talk) 08:17, 7 January 2016 (UTC)
The simplest non-mathematically intense way to explain this is that the exponent on the 10 tells you "Move the decimal point that many places to the right" (or to the left if it's negative). So if you take 80.0 and move the decimal point left 6 times, you get 0.00008 - and if you take 8.0 and move the point to the left 5 times, you get 0.00008...which gives you the exact same result. It is common to use this notation in one of two ways - either:
  1. Choose to make the exponent always be a multiple of three (which makes it easier to mentally call it "thousands", "millions", "billions" and so forth - and also to use SI units with "milli-", "micro-", "nano-" and "pico-")...hence 80·10⁻⁶.
  2. Or choose to always keep exactly one digit to the left of the decimal point at all times...hence 8·10⁻⁵.
When you mix those two conventions (as presumably happened in your example), the arithmetic is no different, the meaning is exactly the same - but everything becomes much more error-prone for us mere humans. SteveBaker (talk) 17:27, 7 January 2016 (UTC)
Or to state what you said more formally: 80 = 8·10¹ + 0·10⁰ in base ten. 0·10⁰ = 0 of course; I included it for illustration. Thus, by the commutative property of multiplication and the identities of exponentiation, 80·10⁻⁶ = 8·10¹·10⁻⁶ = 8·10¹⁻⁶ = 8·10⁻⁵. Remember, in base ten, every numeral "place" represents a power of ten. --71.119.131.184 (talk) 21:16, 7 January 2016 (UTC)
The fact that 80 = 8·10¹ + 0·10⁰ is a confusing red herring, and is not helpful to the explanation. Dbfirs 23:53, 7 January 2016 (UTC)
Thank you all, I got it thanks to your explanations. I appreciate your help. 92.249.70.153 (talk) 06:55, 8 January 2016 (UTC)
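The equivalences discussed in this thread can be checked mechanically. Here is a minimal Python sketch (an illustration, not part of the original discussion), using exact rational arithmetic to sidestep floating-point rounding:

```python
from fractions import Fraction

# 80 * 10^-6 and 8 * 10^-5 as exact rationals
a = 80 * Fraction(10) ** -6   # 80/1000000
b = 8 * Fraction(10) ** -5    # 8/100000
assert a == b == Fraction(1, 12500)

# Float literals: both spellings of the same real number round
# to the same double, so the comparison also holds for floats.
print(80e-6 == 8e-5)  # True
```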

Carbon monoxide

Is there any Misplaced Pages article about mass carbon monoxide poisonings? In particular, there was one steam train that got stuck in a tunnel and the smoke killed everyone -- anyone happen to know when that happened? 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 05:39, 7 January 2016 (UTC)

Thanks! That's what I'm looking for. (Actually, I've already found it, but thanks for the effort anyway!) 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 13:03, 7 January 2016 (UTC)
Just to be helpful, do you want me to add the Myojo 56 building fire to Category:Deaths from carbon monoxide poisoning? 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 13:11, 7 January 2016 (UTC)
Good idea! But it's already in there. DMacks (talk) 16:37, 7 January 2016 (UTC)
What's also surprising is the number of Carbon Dioxide poisoning deaths...Lake Nyos makes interesting reading. SteveBaker (talk) 17:18, 7 January 2016 (UTC)
Actually, there's quite a few more mass deaths from carbon monoxide poisoning, besides Balvano and Myojo 56 -- in fact, many mining disasters involve mass carbon monoxide poisoning. (I've added the Senghenydd Colliery Disaster to the category, for the sake of accuracy.) 2601:646:8E01:9089:F88D:DE34:7772:8E5B (talk) 01:02, 8 January 2016 (UTC)
Getting back to Steve's comment about CO2-related deaths, I ran a search and found this paper by the British Health and Safety Executive on the potential for mass CO2 casualty incidents as carbon is trapped from industrial smokestacks and other sources to mitigate global warming. Which begs the question "Why not capture the CO2 as carbonate?" Carbonates are nice, tractable, industrially-useful solids. The only real way to hurt yourself with some is to drop a 50-pound sack of it on your foot. loupgarous (talk) 07:42, 10 January 2016 (UTC)

Microwave oven entrance plate?

I have a microwave oven. It is 20 years old. It started throwing sparks and shooting small flames. The culprit was a small plastic (?) 'window' that goes over an opening inside the oven. There is a channel that conducts microwaves from the cavitron to the inside of the oven. This window/panel covers the place where this channel enters the interior of the oven. A fingernail-sized portion of this cover had become burned, turned black and bubbly.

What is this cover made of? Why is it there? Why did it catch fire? Cpergielx (talk) 19:45, 7 January 2016 (UTC)

It's made of plastic or cardboard. My microwave oven has one made of cardboard. It's there to protect the cavitron. Plastic and cardboard are mostly transparent to microwaves, so the microwaves from the cavitron can pass through into the oven chamber, but matter will be blocked. Yours might have gotten contaminated with something, like oil or something metallic, that absorbs microwaves. --71.119.131.184 (talk) 23:57, 7 January 2016 (UTC)
I read that some part of the microwave has a lifetime of about 2000 hours, so maybe after 20 years it's due for replacement anyway. I know the ones at work have gotten way less powerful over the (four) years. --78.148.110.91 (talk) 05:43, 8 January 2016 (UTC)
It's the waveguide cover. It may be made of mica. What you described seems to be a common problem. --Amble (talk) 09:16, 8 January 2016 (UTC)

Dignitas death experience

According to the WP article, the method used by Dignitas puts people in a coma and takes about 30 minutes for them die. Does this mean they are liable to experience some kind of death dream like that in Jacob's Ladder? I know we can't know exactly what a dying person experiences but what can be inferred from people who came out of comas? --78.148.110.91 (talk) 23:38, 7 January 2016 (UTC)

Linking: Dignitas (assisted dying organisation).--Scicurious (talk) 00:02, 8 January 2016 (UTC)
Pentobarbital is humane enough for executions, at least in Texas. That does not mean the manufacturers would sell it for this purpose.
Regarding the experience of this suicide method: I don't know how this compares to the experience of coma patients. Nor do I know whether they will see their life flashing before their eyes. I'd rather compare this to a barbiturate overdose or a heroin overdose. Some people came back from that and can tell the story.
I also wonder why they don't use an elephant dose, and shorten the process to some seconds. Is that for the Jacob's Ladder effect? --Scicurious (talk) 00:22, 8 January 2016 (UTC)
For oral administration, it can already be difficult to swallow the necessary dose, according to this interesting article. And remember that a lot of terminal illnesses can interfere with functions like swallowing. Intravenous administration is another option, but of course then you need to insert an IV line, and I suspect there might still be an issue with administering large doses. People who receive large amounts of IV medication often have PICC or "central lines" inserted for that purpose. --71.119.131.184 (talk) 03:27, 9 January 2016 (UTC)
Maybe? Near-death experience is probably the article you want. Just for general interest, according to the article they use pentobarbital; barbiturate overdose is a fairly standard euthanasia/assisted suicide method. --71.119.131.184 (talk) 00:24, 8 January 2016 (UTC)

Follow-up question: any reason why they don't use nitrogen gas? That should at least be quicker? It sounds pretty mental but I'd probably prefer near instant mechanical destruction - I wouldn't want any neurones in contact with each other - it's the only way to be sure! 78.148.110.91 (talk) 01:55, 8 January 2016 (UTC)

The article on the organization says they've used helium gas asphyxiation in the past (inert gas asphyxiation is our article on the method). This verges into speculation (which is discouraged on the Reference Desk), but I think it's probably because it's more involved. You need the gas, breathing equipment, and you need people trained in using it, plus you need to ensure ventilation for the gas so it doesn't build up, which could kill people inadvertently. Administering a drug orally or intravenously is just simpler. --71.119.131.184 (talk) 03:27, 9 January 2016 (UTC)
close off-topic diversion I started, if no one minds -Medeis
The following discussion has been closed. Please do not modify it.
Do you mean the experience after you woke up, or during the procedure? Scicurious (talk) 02:18, 8 January 2016 (UTC)
@ μηδείς "... and was clinically dead for a few minutes" Do you mean with no cardiac or cerebral function? Was this an unforeseen occurrence? Did the 'horrific' part occur as you went into this state or as you came out? Richard Avery (talk) 07:47, 8 January 2016 (UTC)
"Full recovery of the brain after more than 3 minutes of clinical death at normal body temperature is rare." according to Clinical death.Abaget (talk) 09:42, 8 January 2016 (UTC)
Apparently, Medeis resuscitated. Scicurious (talk) 15:57, 8 January 2016 (UTC)
The doctor couldn't find my pulse for two minutes, I wasn't "declared" dead, so I really shouldn't have said clinically dead. The surgery was completed, and the horrible part was the coming to, although going under wasn't pleasant either. Like swimming up from the black depths with no air in my lungs. I'll have to ask my dad about the circumstances, since the dentist actually recruited him to help revive me. μηδείς (talk) 17:16, 8 January 2016 (UTC)
"the dentist ... recruited him to help revive me" Dude, your daughter kind of died a few minutes ago; if you have a sec to revive her, that'd be nice.

CO2 filter that lets O2 through

Could a mask incorporate a CO2 filter, but let O2 through? After all, O2 is a smaller molecule. --Scicurious (talk) 23:52, 7 January 2016 (UTC)

Yes, rebreathers do just this. However they don't filter molecules based on their size. They use principles of chemistry, such as using a chemical that reacts with carbon dioxide but not oxygen. Atoms are tiny. Making filters the size of small molecules like carbon dioxide is something we can't really do very well at present. Now the neat thing is there are quite a few such filters, but they're not man-made. Ion transporters and ion channels in cells often work on the basis of atomic size, as well as other things like electric charge. In the future it's possible we might wind up using bioengineered filters made of human-designed enzymes. --71.119.131.184 (talk) 00:04, 8 January 2016 (UTC)
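To make the chemistry concrete: rebreather scrubbers commonly use soda lime, whose overall reaction is CO2 + Ca(OH)2 → CaCO3 + H2O. A rough stoichiometry sketch in Python; the exhaled-CO2 figure is an assumed round number for illustration, not a value from this thread:

```python
# Soda-lime scrubbing, overall: CO2 + Ca(OH)2 -> CaCO3 + H2O
# (1 mol of Ca(OH)2 absorbs 1 mol of CO2)
M_CO2 = 44.01     # g/mol
M_CAOH2 = 74.09   # g/mol

co2_per_day_g = 1000.0  # assumed: roughly 1 kg of CO2 exhaled per day at rest
moles_co2 = co2_per_day_g / M_CO2
caoh2_needed_g = moles_co2 * M_CAOH2
print(f"{caoh2_needed_g:.0f} g of Ca(OH)2 per day")  # ~1683 g
```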

Zeolites are used to do something similar. http://pergelator.blogspot.com/2010/07/oxygen.html Cpergielx (talk) 05:53, 8 January 2016 (UTC)

Because of Graham's Law of effusion, every filter will differentially let through more O2 (MM = 32 g/mol) than CO2 (MM = 44 g/mol). That is, if I had a bag made of a material which is slightly permeable to gas (really tiny holes, slow leak, etc.) and filled it with, say, a mixture of CO2 and O2, as the bag slowly deflated, the O2 would leave faster than the CO2, so slowly the relative concentration of CO2 inside my bag would go up, and O2 would go down. This is true regardless of the pore size, and is based only on the root-mean-square speed of the molecules at a given temperature: at the same temperature, the lower-mass oxygen molecules are moving faster, so at the same temperature and pressure, more oxygen molecules "hit" a hole in a filter than carbon dioxide molecules. Even if the holes are many billions of times larger than an individual molecule, in the bulk, oxygen will always leak faster than carbon dioxide. While it is true this does not meet the requirements of a perfect filter (letting ONLY oxygen through and NEVER carbon dioxide), with any permeable membrane, on the balance oxygen will always pass through the membrane more readily than carbon dioxide. --Jayron32 16:19, 8 January 2016 (UTC)
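The ratio in the paragraph above follows directly from Graham's law, rate₁/rate₂ = √(M₂/M₁). A quick numerical check:

```python
import math

# Graham's law of effusion: rate_1 / rate_2 = sqrt(M_2 / M_1)
M_O2 = 32.00    # molar mass of O2, g/mol
M_CO2 = 44.01   # molar mass of CO2, g/mol
ratio = math.sqrt(M_CO2 / M_O2)
print(f"O2 effuses {ratio:.3f}x as fast as CO2")  # ~1.173x
```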
Couldn't you use a zeolite filter to do some of the filtering? Graham's law also assumes non-turbulent diffusion I think. Yanping Nora Soong (talk) 22:21, 8 January 2016 (UTC)

January 8

argh! what is the standard industry term for this kind of plug?

I'm looking at this 12 V DC 1 A power supply which takes in AC input. Critically, the product features this tiny accessory which converts the "laptop-style" 12V DC plug into two +/- screw terminals. What is the standard industry term for this type of accessory or plug? (The inverse version -- male barrel plug, female screw terminals is here ) My main motivation is to power this 12V vacuum motor to help with Buchner flask extractions. However, I don't know how I'm supposed to solder a 12V connection onto a screw? I'm just trying to find out more about these "screw terminals". Yanping Nora Soong (talk) 00:50, 8 January 2016 (UTC)

The screw terminals are designed to screw down onto a pair of wires. They are not designed to solder. There would be different power sockets designed to solder to a board. Graeme Bartlett (talk) 01:47, 8 January 2016 (UTC)
What's the industry term for this kind of terminal? I still can't find articles for it. More importantly, how do I secure wires onto the screw? Do I have to tie down the individual filaments on the wire around the screw? That seems kind of hard. Yanping Nora Soong (talk) 04:22, 8 January 2016 (UTC)
The type of connector that you called "laptop-style" is called a coaxial power connector. The "accessory" you referred to is an adapter converting one type of connector to another. You can see the screw terminal end of the adapter more clearly in the pictures on these pages. --98.115.39.92 (talk) 05:04, 8 January 2016 (UTC)
To use the screw terminals, undo the screw a bit. Insert a wire stripped for a short distance into the jaws, and then screw the screw down till it has a good grip on the wire. Do this also for the other wire. Graeme Bartlett (talk) 05:16, 8 January 2016 (UTC)
As no-one's linked it yet, the technical term is screw terminal. Tevildo (talk) 08:51, 8 January 2016 (UTC)

Do eggs contain (free) H2O?

I'm asking this question because whenever I fry eggs I see vapor rising from the frying pan.92.249.70.153 (talk) 03:41, 8 January 2016 (UTC)

Yes. Our article, egg white § Composition cites a source, stating 92% of the egg-white (the liquidy part of a chicken egg) is water, by mass. Similarly, our article on yolk, § composition, states that the yellow part contains lots of fats and oils, by mass; but there are some other trace compounds; and about 50% of the yolk is water. Nimur (talk) 03:59, 8 January 2016 (UTC)
Half of the water in the yolk is water? ←Baseball Bugs 04:09, 8 January 2016 (UTC)
Sorry, I had a typo in my comment; I removed it. Nimur (talk) 04:26, 8 January 2016 (UTC)
Interesting, thank you. 92.249.70.153 (talk) 06:40, 8 January 2016 (UTC)
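For a rough sense of scale, the percentages above can be combined with typical component masses. The masses here (a large chicken egg: ~33 g white, ~17 g yolk) are assumptions for illustration, not figures from the thread:

```python
# Water content of an egg from the article's percentages
# (92% of the white, ~50% of the yolk) and assumed masses.
white_g, yolk_g = 33.0, 17.0              # assumed typical masses
water_g = 0.92 * white_g + 0.50 * yolk_g
print(f"about {water_g:.0f} g of water")  # ~39 g
```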

World's lowest maximum voice

I was trying to search on Bing for the world's quietest maximum voice, but I couldn't find it. I'm thinking that the world's weakest maximum voice would be so feeble that one can barely hear it even when putting one's ear right beside the person's mouth in a quiet room. In order for a person with that feeble a voice to be heard when talking to people from a distance, they would need a microphone or an amplifier near the mouth that can amplify the voice a thousand times to reach the loudness of a normal voice. PlanetStar 04:38, 8 January 2016 (UTC)

Is there a question? Double sharp (talk) 10:17, 8 January 2016 (UTC)
I just commented about it to see what anybody would think. PlanetStar 00:39, 9 January 2016 (UTC)
You may pursue your search in Wikipedia, starting at the links shown in List of language disorders. Dysphonia is the medical term for disorders of the voice, distinct from Aphonia, where speech is impossible. Voice pedagogues define voices in terms of their musically useful Vocal range of singing pitch, which for a given singer differs depending on whether the context is opera, where singers must project over an orchestra without the aid of a microphone, or pop songs delivered in a recording studio. Two extreme voice modes are shouting, where a 129 dBA audio level was set by a human in 2000, and Muteness, the inability to speak, which can be due to a variety of Speech disorders. Since anyone can deliberately reduce their speaking volume, it seems unreal to identify anyone as possessing the world's quietest maximum voice. A typical electric Megaphone can provide 1000x sound power amplification, i.e. 30 dB (decibels). In a crowd address situation this can enable each of 1000 listeners to hear the speaker as though they were about 1 m distant (see Sound power, Sound pressure). AllBestFaith (talk) 11:27, 8 January 2016 (UTC)
I was thinking that many cases of faint maximum voices are caused by voice disorders. PlanetStar 00:39, 9 January 2016 (UTC)
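The megaphone figure quoted above (1000× power amplification ≈ 30 dB) follows directly from the decibel definition:

```python
import math

# Power ratio expressed in decibels: dB = 10 * log10(P_out / P_in)
gain_db = 10 * math.log10(1000)
print(gain_db)  # 30.0
```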

Is it true that ethnic Swedes will be a minority by 2050?

Read somewhere else that it's by 2041. 2.102.185.25 (talk) 05:12, 8 January 2016 (UTC)

They are a minority now - depending on whom you count, there are between 8 and 13 million Swedes, out of about 7.3 billion humans. --Stephan Schulz (talk) 08:54, 8 January 2016 (UTC)
I was thinking to ask the OP, "In which country?" It might be the case that there are more "ethnic Swedes" in America than in Sweden. ←Baseball Bugs 09:04, 8 January 2016 (UTC)
Swedish Americans says there are 4.3M Americans of Swedish ancestry, which is definitely less than the 7.6M Swedes in Sweden. Dragons flight (talk) 09:32, 8 January 2016 (UTC)
I believe the poster is referring to the anti-immigration rhetoric in Sweden. From 2010 to 2014, the percentage of Swedish residents with "Foreign background" (defined as foreign-born, or Swedish-born with two foreign-born parents) has increased from 19.1% to 21.5% according to official statistics. Alarmists extrapolate this 2.4-percentage-point growth over four years (and similar statistics) to conclude that in several decades Sweden will be majority foreign. Of course, such extrapolations from a few years of data are rather crazy. I believe the official projections assume the foreign population stabilizes at around 25%. It is also worth keeping in mind that almost 2/3 of the "foreign background" residents have already become Swedish citizens. Dragons flight (talk) 09:24, 8 January 2016 (UTC)
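For illustration, here is the naive straight-line extrapolation described above, using the official figures quoted in that reply. This is exactly the kind of crude projection the reply warns against, not a forecast:

```python
# Official figures quoted above: 19.1% (2010) -> 21.5% (2014)
p2010, p2014 = 19.1, 21.5
rate_per_year = (p2014 - p2010) / 4       # 0.6 percentage points/year

# Naive linear extrapolation to a 50% "foreign background" share
years_to_50 = (50.0 - p2014) / rate_per_year
print(f"crosses 50% around {2014 + years_to_50:.0f}")  # ~2062
```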
I think the definitions are the toughest part of these things. At least in the U.S., black x white = black, Hispanic x anything = Hispanic. So unsurprisingly enough, the number of blacks and especially Hispanics steadily rises. Short of banning miscegenation there is no way to stop that. But it doesn't actually mean anything. Wnt (talk) 14:23, 8 January 2016 (UTC)
Likewise, in any country, you tend to find the ultra-right-wing nationalists tend to define "true X" as "All X ancestors since time immemorial"; that is, the xenophobic nutjobs who care about these things would say (in this case), "A true Swede is someone whose family has always been pure Swedish", as if you had to trace your ancestry back to Odin to be a "true Swede". See, of course, no true Scotsman for the problem with such thinking, along with the mythos that ethnicity is fixed and immutable over time. --Jayron32 16:09, 8 January 2016 (UTC)
Actually, there have been no true Swedes ever since the Vikings brought back Slavs and Celts and Anglo-Saxons to serve as thralls and polluting the blood line. And even Ragnar Lodbrok was originally from African stock. --Stephan Schulz (talk) 17:10, 8 January 2016 (UTC)
Lots of well-meaning obscurantism in this thread. Asmrulz (talk) 21:57, 8 January 2016 (UTC)
Actually, Dragons flight and Wnt seem to have answered the OP's question with no obscurantism at all. Ethnic Swedes are only a majority in Sweden now, as Dragons flight pointed out. And Wnt brought up the parallel issue here in the United States of when Caucasians cease being the majority ethnic group. He's absolutely right that it doesn't mean anything - except, of course, for those on the right and left political wings here for whom racial identity is more important than every one of us being an American. That includes the many, many people you see in Denver driving around in pickup trucks on Cinco de Mayo with huge Mexican flags in the hands of several inebriated friends. loupgarous (talk) 06:56, 10 January 2016 (UTC)

Does the stroke volume of the right ventricle equal that of the left ventricle (heart)?

I have heard many opinions and I would like a reliable source about that. Thank you 92.249.70.153 (talk) 06:39, 8 January 2016 (UTC)

Our article on stroke volume defines the term as "the volume of blood pumped from the left ventricle of the heart per beat." So the term "stroke volume" has no meaning applied to the right ventricle's output. I was medical writer for a cardiology research clinic for a few years, and this is the same understanding I had working with several cardiologists and technicians who dealt with the concept of stroke volume many times a day. loupgarous (talk) 07:12, 10 January 2016 (UTC)

Do the right and left ventricles have the same BPM?

I'm reading the article Ejection Fraction here, and I got confused when I saw this table. It's not clear to me (from the information in this table) whether the right and left ventricles have the same BPM, because it says there that the right side has 75 bpm while the left side has 60-100 bpm (there is no source), and that doesn't make sense to me, because according to what I know the impulse from the sinus node depolarizes both ventricles at once. I would like to check this with you, thank you 92.249.70.153 (talk) 06:52, 8 January 2016 (UTC)

The mean volume must be close to identical because the same volume must pass through the two halves of the same circuit. This does not mean that each stroke must be identical. There will also be slight differences in the mean volume, as some plasma leaves the blood circuit as lymph and does not get reintroduced identically. If one ventricle comes up a bit short on one stroke, the resultant blood volumes in the vessels will tend to help balance the mean volume on the next stroke through preload and afterload effects.
The mismatch in beats per minute on the table in ejection fraction may be a difference in measurement conditions, not a mismatch in an individual's ventricular rates (which, as noted above, would make no sense). I'd have to dig into the table to figure out the details. BiologicalMe (talk) 16:47, 8 January 2016 (UTC)
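The balancing feedback described above (a short stroke raises the preload of the other side on subsequent beats) can be sketched as a toy simulation. Everything here is illustrative: the compartment volumes and the ejection constant `k` are invented, not physiological measurements; the only point is that proportional (Frank-Starling-like) feedback forces the two stroke volumes to converge:

```python
# Toy model: two ventricles pumping blood around a closed two-compartment loop.
# Each beat, stroke volume is proportional to the volume feeding that
# ventricle - a crude stand-in for the preload (Frank-Starling) effect.
def simulate(beats=200, total=5000.0, pulmonary=500.0, k=0.07):
    systemic = total - pulmonary
    for _ in range(beats):
        sv_right = k * systemic   # right ventricle fills from systemic return
        sv_left = k * pulmonary   # left ventricle fills from pulmonary return
        pulmonary += sv_right - sv_left
        systemic += sv_left - sv_right
    return sv_right, sv_left

sv_right, sv_left = simulate()
# Even starting with a large imbalance between the compartments,
# the two stroke volumes converge to the same value within a few beats.
```

Perturb either compartment and the loop re-converges the same way, which is why the *mean* outputs of the two ventricles must match even though individual strokes can differ.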

Nobel prize nominee but never won

Can I get a list of people who were nominated for a Nobel Prize but don't have an article yet? Is there any website which shows people who are likely to win a Nobel Prize in the future? --Marvel Hero (talk) 07:24, 8 January 2016 (UTC)

I seem to recall, from a discussion here a good while back, that nominees are not officially known to the public. So any information on the subject would be anecdotal or speculation. ←Baseball Bugs carrots07:27, 8 January 2016 (UTC)
Not exactly. The records of the Nobel Committee are released 50 years after the fact. So one can browse the nomination archive . I don't know of any lists that explicitly state nominees that never won, but one could make such a list for the time periods when records have already been unsealed. Dragons flight (talk) 07:49, 8 January 2016 (UTC)
That doesn't answer the second half of Marvel Hero's question, because that asks "Is there any website which can show the persons who are capable of getting Nobel Prize in future?" Peter Higgs is the only person I'm aware of who went anything like 50 years from publication of the work for which he earned the Nobel Prize (1964) to actually getting the prize (2012). I'm skeptical that such a website (of potential future Nobel laureates) exists. loupgarous (talk) 08:05, 10 January 2016 (UTC)

Science awards

After the Nobel Prize, which science awards are the most internationally recognized? I need general views, not a precise ranking. --Marvel Hero (talk) 07:46, 8 January 2016 (UTC)

From the fields that are not covered by Nobels, I'd say the Fields Medal for mathematics and Turing Award for computer science. We do have List of prizes known as the Nobel of a field. --Stephan Schulz (talk) 13:04, 8 January 2016 (UTC)
Stephan Schulz, your answer was helpful. I would also like to know about the others which are reputed but rank just below the Nobel, in physics, chemistry, biology and medicine. Just as the FIFA World Cup is the top tournament in football, followed by the UEFA European Championship, I want to know which awards given to scientists are considered second-best after the Nobel Prize (in any field). --Marvel Hero (talk) 13:23, 8 January 2016 (UTC)
I find your lack of faith in Copa América disturbing. :)Naraht (talk) 22:10, 8 January 2016 (UTC)
I would mention the MacArthur Fellows Program as one of the more well-known recognition programs that covers broad topics (not just science though). Dragons flight (talk) 13:13, 8 January 2016 (UTC)
You can also locate the more prominent societies and organizations: for example, representing Physics in the United States is the American Physical Society, who host a series of prizes, honors, and awards. The same applies to other fields and other geographic regions. There are also more broad organizations, like the American Academy of Arts and Sciences; induction into AAAS as a fellow is a prestigious opportunity offered only to a select few researchers and scholars.
There are also political awards, like the National Medal of Technology and Innovation and even the Presidential Medal of Freedom, or the wide array of equivalent honors bestowed in other countries. These types of awards are widely publicized and well-known; but they are arguably less an accolade of pure technical accomplishment, and often connote some accomplishment that benefits society at large.
More locally, my old school confers the honorary title, Engineering Hero, to a very select few contributors whose accomplishments have "profoundly advanced the course of human, social and economic progress through engineering." I'm not too sure how widely known this accolade is, but the recipients have usually been world-famous! You can probably find similar awards, ranging from medals to titles to honorary doctorates. At what point does such an accolade qualify as "internationally-known"? Does "international repute" necessarily entail using a lot of money as the prize-incentive? If so, even a third-tier venture-capital awardee probably has received more prize-money than the Nobel in recognition of some quasi-technical accomplishment!
On the whole, Nobel Prizes and Fields Medals are the big ones - those are recognized widely by people who aren't scholars or subject-matter experts. All the other awards that have a lower profile tend toward enhancing recognition within a particular audience.
Nimur (talk) 14:47, 8 January 2016 (UTC)

hazards of abandoned buried electrical cables

With respect to this article, can anyone figure out whether the abandoned electrical cable was still energized, and if not, how it caused such an explosion? —Steve Summit (talk) 15:23, 8 January 2016 (UTC)

The article, Salty Brine State Beach, links to one news story, from the Providence Journal, July 2015, that says the electrical cable was deenergized and unrelated to the explosion. This isn't conclusive to me, but it may point toward better sources or investigation reports.
The news report follow-ups cite statements and reports from the Rhode Island Department of Environmental Management. That's where I would look for more documentation. The present theory is that copper, in contact with salt water, somehow corroded and evolved hydrogen gas, which became trapped and eventually ignited. This theory leaves room for more questions, but at least you have a few places to look.
More sources:
Something still has to provide ignition, in order to complete the fire triangle; and something has to provide current for electrolysis in order to produce hydrogen in the first place; but metal in contact with salt water is in itself a Galvanic cell, so that really might be all there is to this story.
Nimur (talk) 15:43, 8 January 2016 (UTC)
Thanks. I'm having a hard time believing that enough hydrogen could have been, er, generated to cause that big an explosion, but (a) stranger things have happened and (b) my intuition about anaerobic chemistry is nil. —Steve Summit (talk) 15:57, 8 January 2016 (UTC)
The key is trapped gas - a low concentration of hydrogen becomes a higher concentration over time; and then you have a confined combustion, which turns a deflagration into a detonation. According to the report, hydrogen will burn at 4% in air, and detonate at 20% concentration, at ordinary atmospheric pressure. When trapped, in an unknown environment with unknown mixtures of other materials, with unknown total and partial pressures, we can only guess: hydrogen detonates at a concentration somewhere between "0%" and "100%".
It takes very little fuel to produce a massive explosion. The keys to powerful explosions are oxidizer and confinement. This is one of many good reasons why certain other reference desk contributors, who shall remain unnamed, really need to stop playing kitchen-chemist in their apartment building.
Nimur (talk) 16:01, 8 January 2016 (UTC)
Well, that depends on what you mean by "massive" and also by "powerful". The maximum energy release is limited by the amount of fuel, at least. --76.69.45.64 (talk) 20:47, 8 January 2016 (UTC)
Indeed, but my point is that a very small amount of fuel can carry a very large amount of chemical energy. Usually, you don't get an instantaneous and complete combustion reaction; but if conditions are right, and the stoichiometry is balanced, the amount and rate of energetic release is qualitatively different: this is the difference between a fire and an explosion. If all the stored energy is released suddenly and quickly, a very tiny quantity of fuel is enough to send things flying. Have a look at the quantities of TNT used in terrible weapons like hand-grenades: often, these weapons need less than an ounce of explosive.
In my younger days, I used to volunteer for an educational Chemistry Magic trick show for little kids. We would blow up hydrogen balloons, and hydrogen-oxygen mixture balloons, and set them off to show the grade-schoolers why they should pay attention in science class. The pure hydrogen balloons made for a neat firework - sort of like burning flash paper. When we mixed Hydrogen and Oxygen, using the exact same quantity of fuel in the presence of stoichiometrically-balanced oxidizer, the pressure front could blow out the window glass from well-constructed rooms. So, one party-balloon's worth of H2 is capable of releasing enough thermal energy to cause serious permanent damage, if the combustion goes right. Moral of this story: don't underestimate fuel energy content; don't mess with strong oxidizers. And, unless you have a real need to use it, stay away from hydrogen, too - it presents unique hazards. It's not easy to develop intuition about the size of a reactant - particularly, a gaseous one - and the size of the boom it's going to make: so err on the side of caution, and always assume it can make a really big boom. Nimur (talk) 23:07, 8 January 2016 (UTC)
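The balloon anecdote above can be checked with a back-of-envelope calculation. The 10-litre balloon volume is an assumption chosen for illustration; the constants are standard textbook values:

```python
# Rough chemical energy content of a hydrogen-filled party balloon.
R = 8.314          # J/(mol*K), gas constant
P = 101325.0       # Pa, atmospheric pressure
T = 298.0          # K, room temperature
V = 0.010          # m^3, assumed 10 L balloon
dH = 286e3         # J/mol, enthalpy of combustion of H2 (to liquid water)

moles_h2 = P * V / (R * T)        # ideal gas law: about 0.41 mol
energy_j = moles_h2 * dH          # about 117 kJ
tnt_equiv_g = energy_j / 4184.0   # 1 g TNT ~ 4184 J: roughly 28 g TNT

# Stoichiometric H2 fraction in air (2 H2 + O2 -> 2 H2O, air ~21% O2):
o2 = 0.21
stoich_frac = 2 * o2 / (2 * o2 + 1)   # about 0.30, i.e. ~30% H2 by volume
```

So a single party balloon of hydrogen really does carry energy comparable to tens of grams of TNT, and the most violent mixtures sit near the ~30% stoichiometric point, well above the 4% flammability limit quoted above.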
"Metal in contact with salt water" is not sufficient to make a galvanic cell. You need two different conductive substances in contact with the salt water and in contact (or at least electrically connected) with each other: it's the difference in electrode potential that drives the cell. The corrosion at one end of the wire certainly fits with the explanation that this is what was going on, but what was the other conductive substance involved? The cited references don't address that. --76.69.45.64 (talk) 20:47, 8 January 2016 (UTC)
From Corrosion, §Galvanic Corrosion:
"Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations." (Emphasis added).
The effect occurs in any condition, as long as there is a potential energy gradient - even if there's only copper; the most common textbook example is a bimetallic junction, or a set of copper and zinc plates separated and immersed in an electrolyte; but there are lots of permutations on the theme. It's also possible that the copper wire might have been in electrical contact with some other metal, or some minerals or impurities in the sand, .... Nimur (talk) 23:07, 8 January 2016 (UTC)
Okay, that makes sense about varying concentrations, but I don't see how it could apply when we're talking about seawater. The sea is pretty well mixed. Other metal seems a more likely possibility but if that was it then I would have expected it to be mentioned. --76.69.45.64 (talk) 07:37, 9 January 2016 (UTC)
Hmmmm, this raises questions in my mind concerning ground potential. If you have a very long, very highly conductive cable buried in the ground, if the ground is not as conductive, can passing storms or tides or other ocean phenomena create charge analogous to a thunderstorm, relative to areas far further inland? Wnt (talk) 16:08, 8 January 2016 (UTC)
I'm not sure I completely follow your question, Wnt: in this context, electric charge is not created or destroyed - just moved around. But, you are correct that long electric transmission lines may hold potentials between different locations (superimposed on top of the man-made electric potential that is applied by the various power stations and power distribution terminals). One cause for those unwanted electric potentials can be geomagnetic or atmospheric static electric potential. This manifests as unwanted surge-current, unexpected line impedance, and basically just excess wasted electrical energy in the power distribution system caused by natural phenomena. Reciprocally, power lines can affect those natural phenomena, too - they can interact with geomagnetic and atmospheric electric potentials. When we are moving around gigawatts of power - which our electric power grid does every minute of every day - those effects are actually measurable. Nimur (talk) 16:43, 8 January 2016 (UTC)
@Nimur: Sorry, by "create charge" I meant "create a concentration of charge in a certain area". What I mean is that if enough of a voltage difference comes to exist between the two ends of the cable, it should create a potential (pardon the pun) for hydrolysis and hydrogen formation to occur, which would not be the case if, say, someone dug up the cable and cut it in a few spots. So I'm wondering how "live" a dead cable can become, based on weather, and whether that weather includes only the familiar thunderstorms, or if winds and currents in the ocean can create a whole different basis for voltage differences versus areas far inland. Wnt (talk) 19:32, 8 January 2016 (UTC)
Suffice it to say that such phenomena are rare. Most of the time, corrosion does not evolve hydrogen gas in sufficient quantity to create a hazard. Nimur (talk) 23:07, 8 January 2016 (UTC)
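For scale, Faraday's law converts a corrosion current into a hydrogen yield. The current (1 mA) and duration (one year) below are pure assumptions chosen for illustration, not figures from the incident report:

```python
# Hydrogen evolved by a small stray galvanic/electrolytic current.
# Cathode half-reaction: 2 H2O + 2 e- -> H2 + 2 OH-  (2 electrons per H2).
F = 96485.0                 # C/mol, Faraday constant
current = 0.001             # A, assumed stray galvanic current
seconds = 365 * 24 * 3600   # assumed duration: one year

charge = current * seconds        # about 3.2e4 C
mol_h2 = charge / (2 * F)         # about 0.16 mol
litres_stp = mol_h2 * 22.4        # about 3.7 L of H2 at STP
```

A few litres of hydrogen is negligible in open air, but trapped and concentrated under pavement over months it can reach a flammable mixture, which is consistent with the corrosion theory without requiring the cable to be energized.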

slower/bigger molecule = bigger cross-section / more reactive? (second-order / SN2)

I'm sorry for starting another question again, but this issue surprises me. In chemistry classes I was taught that SN2 / second-order reaction rates are mainly influenced by cross-section (included in the kinetic constant) and collision rate (affected by concentration and order of reaction). This GC/MS study has elution data indicating that larger primary amines react more quickly or more completely with dichloromethane than smaller n-alkyl amines (Figure 1, page 2). In each pair of peaks, the imine ("product") elutes after the amine ("analyte"). This surprises me. For example, integrating by eye, the octylamine : octylmethylimine ratio is approximately 2-3:1, but when n-decanamine (10 carbons) is analyzed the situation is reversed: the imine product now outnumbers the amine analyte ~3:1. For C12 the imine peak also looks bigger, it predominates over the amine peak for C14, and for C16 and C18, according to the researchers, "nearly all the analyte" is consumed.

Why would amines with bigger alkyl chains react more completely with dichloromethane? Is it an issue of solubility or polarity? Does a bigger chain in a borderline-polar aprotic solvent (DCM's dielectric constant = 9.1, compared to DMSO's 47.2) increase nucleophilicity? Or is it a matter of collision kinetics, where a bigger molecule actually increases the reaction rate or the probability of a favorable collision? Yanping Nora Soong (talk) 23:36, 8 January 2016 (UTC)

Strangely this same question was asked here: https://www.reddit.com/r/chemistry/comments/403t1c/larger_straight_chain_primary_amines_eg_c16_c18/
Note that the methylene chloride is in vast excess compared to the amine in this experiment, so the rate of reaction would be proportional to the concentration of the amine. Also, the amine will have been sitting in the solvent for plenty of time, so the reaction rate matters less than the equilibrium reached. So perhaps methylimines of shorter alkyl chains are less stable than those of the longer chains. Graeme Bartlett (talk) 12:01, 9 January 2016 (UTC)
Perhaps the long-chain amines form micelles in the solvent. But I don't see what effect that would have, as the active part is in contact with the solvent. The longer-chain compounds also had a lower molar concentration in the experiment, as the mass concentration was kept constant, which would favour more of the end product. Graeme Bartlett (talk) 20:16, 9 January 2016 (UTC)
I thought of that too. I note that all the chains (C8 to C18) are present in the DCM at the same time. So, for example, the C18 chains can still receive hydrogen bonding from the C8 chains. But the longer chains still reacted more completely. Yanping Nora Soong (talk) 23:41, 10 January 2016 (UTC)
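The excess-solvent argument above can be sketched as pseudo-first-order kinetics: with DCM as the solvent its concentration is effectively constant, so each amine decays exponentially with its own observed rate constant. The rate constants and standing time below are invented purely for illustration:

```python
import math

# Pseudo-first-order decay of an amine in neat DCM:
# rate = k2 * [DCM] * [amine], with [DCM] constant, so fold it into k_obs.
def fraction_remaining(k_obs, t_seconds):
    return math.exp(-k_obs * t_seconds)

month = 30 * 24 * 3600.0  # assumed standing time in solvent

# Hypothetical observed rate constants (per second) for two chain lengths:
for name, k_obs in [("C8 amine", 3e-7), ("C18 amine", 1e-6)]:
    left = fraction_remaining(k_obs, month)
    # C8: ~46% of the analyte survives; C18: only ~7% survives.
```

The point is that a modest factor in k_obs compounds into a dramatic difference in surviving analyte over a long standing time, which is qualitatively consistent with the near-total consumption reported for C16-C18 without requiring an enormous intrinsic rate difference.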

January 9

When is technology not just equal to applied science?

When is technology not just equal to applied science? Some people seem to reject the notion, but can't everything be explained by science, including things like a human using a machine or technology without fully understanding it? --Scicurious (talk) 02:46, 9 January 2016 (UTC)

Are you asking if there is technology that's beyond science's ability to understand? Sure - there are plenty of things we don't fully understand. There are (for example) many drugs that we know to be effective but we don't fully understand why. I'm sure I could come up with any number of examples given time.
Are you asking whether there are things that science knows that we don't yet have the technology to use? Yes, sure - we're still struggling to use quantum theory to build practical computers - or applying what we know about the behavior of gases and liquids to provide useful applications of turbulent flow.
Certainly there are things that can't yet be explained by science. We don't know what caused abiogenesis, we don't know what caused the big bang. An entirely new particle may have been discovered by the Large hadron collider that lies outside of the standard model. We don't understand everything about the human brain. So sure - there are plenty of things we don't yet understand. That's not to say that these things can't ever be explained by science.
I'm not sure what you mean by "a human using a machine". We don't fully understand every aspect of how a human works - so I guess we don't fully understand how a human uses a machine...or anything else for that matter.
We should be careful, however, to avoid reading any kind of mysticism into this. We often don't know something because we haven't spent the money to do the research yet... there are few phenomena that we believe we'll never be able to comprehend through the lens of the scientific method... but there are a few. We know (because of chaos theory) that we won't ever be able to accurately predict the weather more than a few days into the future. There are sharp limits in a few of these cases.
SteveBaker (talk) 03:38, 9 January 2016 (UTC)
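The hard prediction limit mentioned above can be illustrated with the logistic map, a standard textbook example of sensitive dependence on initial conditions (not a weather model): two orbits that start 10^-10 apart become completely uncorrelated within a few dozen iterations:

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x).
def logistic_orbit(x0, steps):
    xs = []
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-10, 60)

# Early on the orbits are indistinguishable; after ~40 steps they bear
# no resemblance to each other, despite fully deterministic dynamics.
late_gap = max(abs(p - q) for p, q in zip(a[40:], b[40:]))
```

No measurement of the initial state is exact, and the error roughly doubles per step, so the forecast horizon grows only logarithmically with measurement precision - the same qualitative limit that caps useful weather forecasts at days, not weeks.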
  • I actually meant someone using a technology without understanding it. Indeed, I wonder whether a working technology automatically becomes a scientific discovery (albeit in a rather disorganized way). Using the bark of willow trees to treat fever or pain does not require understanding how it works. However, there was a hypothesis (willow to treat pain), a test, and a conclusion. That would lead us to the view that any technology somehow has a scientific theory backing it.--Scicurious (talk) 01:45, 10 January 2016 (UTC)
For the "human using a machine" part, perhaps the OP refers to the way in which people adapt so readily to certain technologies that they really become an extension of our own bodies. Driving a car is one simple example. Despite never having evolved to use a steering wheel and brake and accelerator pedals, it soon becomes totally automatic for us. For an even more interesting example, when electrodes are implanted in the brain of some blind patients, in the form of an array of pixels, that person soon learns to see. Apparently connections form and the brain "learns" how to read the info from those electrodes as an image, almost like magic. StuRat (talk) 06:50, 9 January 2016 (UTC)
Actually I think the exact mechanism of virtually every single medication is unknown, except aspirin, which was only recently understood. Quite entertaining to read an FDA trial (BTW, despite the tediousness, it is always interesting to read trial methods and results. Doctors often don't, and I am amused when they prescribe things like "extended release" medication but haven't bothered to see what the drug company did.). But to get to the OP: at some point there may be artificial intelligence where the thing applying science is not human. Currently almost every decision tree in technology is simply a human type of choice based on a human understanding of science. The last time I looked, neural networks were the closest thing to deviate from a human application of science into a machine-based application of experience. Learning machines aren't necessarily scientific, and humans have lived many generations by surviving on experience without understanding science. Aqueducts existed long before Newton and Bernoulli - we describe them now using science as we know it, but the technology existed before the science. --DHeyward (talk) 10:19, 9 January 2016 (UTC)
Where on Earth did you get the idea that "the exact mechanism of virtually every single medication is unknown"? In your example, it has been known since 1971 that aspirin is a prostaglandin/thromboxane suppressor, most certainly not 'recently understood'. Virtually all new drugs are explicitly designed to exploit certain mechanisms. To the OP, I am personally very happy that we can't explain everything by science, or I would be out of a job as a scientist! Fgf10 (talk) 21:40, 9 January 2016 (UTC)
I suppose DHeyward meant 'exact' in a stricter sense, meaning we cannot know absolutely everything about the effect of a substance. Scicurious (talk) 01:45, 10 January 2016 (UTC)

Is there something in alternative medicine that's not considered as pseudoscience?

92.249.70.153 (talk) 05:28, 9 January 2016 (UTC)

If the definition given in alternative medicine is accurate, then the answer should be "No". ←Baseball Bugs carrots05:31, 9 January 2016 (UTC)
Pseudoscience is false science, something that is presented as scientific when it is not. Alternative medicine is a broad category that can include traditional medical practices and religious beliefs from different parts of the world, and these are generally not regarded as scientific in the first place. In that case alternative medicine could be unscientific without being pseudoscientific. --Amble (talk) 05:53, 9 January 2016 (UTC)
Herbal medicine was basically traditional, but many herbs and fungi do have medical effects that have been taken advantage of by modern medicine. Some of mainstream medicine is also pseudoscience, but we may have to wait for time to pass before the misconceptions fade. Graeme Bartlett (talk) 05:59, 9 January 2016 (UTC)
Agreed. For example, aspirin comes from willow bark, and at one point willow bark would have been used, but unproven, as an alternative medicine. Alternative medicine means it hasn't been scientifically proven to work yet. So, when the actual science is done, some are found to work, and others do not. Then there's also the problem that, with no requirement for scientific evidence, practitioners can try to sell anything for any purpose, even when they know it doesn't work. For example, there is a chiropractic core that seems plausible (adjusting misaligned vertebrae), but, being largely unregulated, chiropractors also make outrageous claims that they can cure anything and everything simply by manipulating the spine. StuRat (talk) 06:43, 9 January 2016 (UTC)
The short answer is that once something is proven to work it becomes "medicine" rather than "alternative medicine." Shock Brigade Harvester Boris (talk) 23:18, 9 January 2016 (UTC)
And when it's proven not to work but people keep doing it, it becomes "pseudo-science". ←Baseball Bugs carrots23:27, 9 January 2016 (UTC)
Examples of near-current medical practice that could be classed as pseudoscience are Lobotomy, Psychoanalysis, Antibiotic misuse and some drugs, particularly those withdrawn or proved ineffective. Graeme Bartlett (talk) 00:10, 10 January 2016 (UTC)
The "science" behind the first two of those items has been questioned, to say the least. Misuse of any drug or substance is questionable too. On the other hand, the use of leaches leeches was considered pseudo-science for a long time, but now it's back within the realm of science, for certain specific situations. ←Baseball Bugs carrots00:27, 10 January 2016 (UTC)
See wikt:leach and wikt:leech.—Wavelength (talk) 00:34, 10 January 2016 (UTC)
OMG. A misspelling. ←Baseball Bugs carrots00:41, 10 January 2016 (UTC)
That's not really accurate. The thing that was and still is pseudoscientific is the idea that many diseases can be fixed by removing blood from the patient. This was part of humorism, the idea that diseases are caused by an imbalance in the body of the "four humors", of which one was blood. Medical leeches are used today for things where the blood circulation is involved, like grafts and phlebitis, and of course the treatment is administered to the relevant body parts, rather than just sticking leeches all over. --71.119.131.184 (talk) 01:58, 10 January 2016 (UTC)
That's why I said "for certain specific situations". ←Baseball Bugs carrots18:50, 10 January 2016 (UTC)
It's difficult to show when alternative medicine isn't pseudoscience, because either (a) it is confirmed and incorporated into "regular" medicine, (b) it is pseudoscience, or (c) mainstream medicine hasn't caught up yet and will not admit anything until it does. For example, consider the news about a miracle cure for cataracts from last year: lanosterol, administered to the eye, which clears cataracts. Well, two thousand years ago, Roman doctors prescribed cyclamen salves for cataracts, and cyclamens are loaded with triterpenes, the class of compound from which lanosterol is drawn. But I don't expect to hear any acknowledgement given at least until the patent expires... Wnt (talk) 11:36, 10 January 2016 (UTC)
I would strongly disagree with the implication given by (c): mainstream medicine hasn't caught up yet and will not admit anything until it does. I believe this is actually the very opposite of what happens in reality 99.9% of the time. If ever, in the remaining 0.1%, "alternative medicine" is ahead of "mainstream medicine", it's purely by dumb chance. To really plumb the depths of this argument you have to delve into epistemology and what it actually means to "know" something, but if you study science and the wisdom of the "ancients", like TCM or Ayurveda, it's not hard to see there was very little good reason (or evidence) to suggest they have ever been "ahead" of mainstream medicine in any way. Vespine (talk)

Mirrors flip...

Mirror image#In three dimensions answers the old question "Why do mirrors flip left and right, but not top and bottom?". The oldest source used there is a paper from 2000, citing another from 1998. But of course, answers have been given much earlier, as in Nature 353, 1869 (but I can't see a preview in my country). As far as reconstructable: What was the first publication of a) the question, b) an answer, c) a somewhat thorough answer? --KnightMove (talk) 10:13, 9 January 2016 (UTC)

That paper is from 1991, not 1869. 1869 is the year Nature was first published. -- BenRG (talk) 21:01, 10 January 2016 (UTC)
A start might be found in Chirality. --DHeyward (talk) 10:31, 9 January 2016 (UTC)
This term was first coined in 1893, only after the Nature article above had been published - and I'm actually not quite sure whether this concept is that important for understanding the answer to this question, even though it is undoubtedly closely related. --KnightMove (talk) 10:43, 9 January 2016 (UTC)
To answer the first part, the oldest publication of the question was in the Timaeus by Plato followed by Lucretius in the poem De Rerum Natura. As to who first published the answer to the question of what's known as the "Mirror paradox", I can't find an answer. Maybe that's because people still argue about what the right answer is - see . However, there does need to be some serious work done on our article Mirror image as it only has one citation. Richerman (talk) 11:18, 9 January 2016 (UTC)
Try lying on your side and you'll see a mirror does not flip left and right, as it does not flip your head and toes. Dmcq (talk) 11:28, 9 January 2016 (UTC)
I use a mirror on the floor, or the ceiling, as example. --KnightMove (talk) 11:48, 9 January 2016 (UTC)
This is more a Humanities question, looking for the first known presentation and refutation of a myth; both probably date to prehistory. To natives of a space station it would be obvious enough that it flips top to bottom, since they would flip around to an upside-down orientation, with left and right preserved, all the time. Wnt (talk) 13:54, 9 January 2016 (UTC)
The original post asks about the first publication of the question and answers - that doesn't date to prehistory. Richerman (talk) 16:00, 9 January 2016 (UTC)
There's a paper with practically exactly the same title as how you phrased it at
Why do Mirrors Reverse Right/Left but not Up/Down by N.J.Block
That dates to 1974 but it talks about it as a common question. Dmcq (talk) 17:26, 9 January 2016 (UTC)
Y'all are overthinking this. Flat mirrors don't really "flip", they merely reflect your image straight back toward its source, and to us it looks flipped. ←Baseball Bugs carrots17:50, 9 January 2016 (UTC)
I think everybody in the discussion here understands that, Bugs. The question is not about what mirrors do, but when what they do was first explained in print. {The poster formerly known as 87.81.230.195} 2.123.25.88 (talk) 19:44, 9 January 2016 (UTC)
A mirror does not reverse left and right. A mirror reverses front and back. This is equivalent to a reversal of left and right combined with a 180° rotation. Robert McClenon (talk) 21:09, 10 January 2016 (UTC)
Correct answer. I figured this out when I was young. I asked myself, "Why does a mirror flip left and right but not up and down? In space, where there is no up or down, how does the mirror know which direction is up and down so as not to flip it?" My conclusion is the same as yours: a mirror flips front and back instead of flipping left and right. 175.45.116.66 (talk) 00:35, 11 January 2016 (UTC)
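The front-back explanation can be checked with coordinates. A minimal sketch: reflection in the xy-plane negates only the z (front-back) axis, and composing it with a 180° rotation about the vertical axis produces the apparent left-right flip:

```python
# A mirror lying in the xy-plane reverses only the front-back (z) axis.
def mirror_xy(p):
    x, y, z = p
    return (x, y, -z)

# A 180-degree rotation about the vertical (y) axis: x -> -x, z -> -z.
def rotate_180_y(p):
    x, y, z = p
    return (-x, y, -z)

point = (1, 2, 3)                    # (left-right, up-down, front-back)
reflected = mirror_xy(point)         # front-back flipped only
composed = rotate_180_y(mirror_xy(point))  # pure left-right flip
```

So "left-right reversal" is what you get when you mentally rotate yourself 180° to face your image; the mirror itself only reverses front and back.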

does Truvada (used in anti-HIV pre-exposure prophylaxis) have significant binding activity to human telomerase reverse transcriptase?

Our article on one of Truvada's reverse transcriptase inhibitors (tenofovir disoproxil) says that tenofovir only weakly interferes with DNA polymerases. But what about human telomerase reverse transcriptase? Can PrEP or HAART cause premature aging?

I also ran a pBLAST comparing the homology of the human and HIV reverse transcriptases, as follows. Is there enough homology for there to be a good chance of Truvada affecting human telomerase reverse transcriptase? While running the pBLAST alignment of the hTERT FASTA sequence against the Uniprot virus database I also noted that hTERT's closest match is a protein/gene product, BHLF1, in the Epstein-Barr virus. Does that mean that novel antiviral treatments targeting the Epstein-Barr virus might be more likely to interfere with human reverse transcriptases?

Query= sp|O14746|TERT_HUMAN Telomerase reverse transcriptase OS=Homo sapiens
GN=TERT PE=1 SV=1 Length=1132
Sequences producing significant alignments:                      Score (Bits)  E-Value
Query_129897  1HMV:D|PDBID|CHAIN|SEQUENCE                          18.9    1.0  
ALIGNMENTS
>Query_129897 1HMV:D|PDBID|CHAIN|SEQUENCE
Length=440
 Score = 18.9 bits (37),  Expect = 1.0, Method: Compositional matrix adjust.
 Identities = 12/40 (30%), Positives = 18/40 (45%), Gaps = 8/40 (20%)
Query  212  VPLGLPAPGARRRG--------GSASRSLPLPKRPRRGAA  243
            V LG+P P   ++         G A  S+PL +  R+  A
Sbjct  90   VQLGIPHPAGLKKKKSVTVLDVGDAYFSVPLDEDFRKYTA  129
 Score = 16.9 bits (32),  Expect = 3.9, Method: Compositional matrix adjust.
 Identities = 8/29 (28%), Positives = 12/29 (41%), Gaps = 6/29 (21%)
Query  236  KRPRRGAAPEPERTPV------GQGSWAH  258
            K P  G   +P +  +      GQG W +
Sbjct  311  KEPVHGVYYDPSKDLIAEIQKQGQGQWTY  339
 Score = 16.9 bits (32),  Expect = 4.0, Method: Compositional matrix adjust.
 Identities = 6/14 (43%), Positives = 9/14 (64%), Gaps = 0/14 (0%)
Query  711  VDVTGAYDTIPQDR  724
            +DV  AY ++P D 
Sbjct  109  LDVGDAYFSVPLDE  122
 Score = 16.5 bits (31),  Expect = 5.3, Method: Compositional matrix adjust.
 Identities = 8/20 (40%), Positives = 11/20 (55%), Gaps = 6/20 (30%)
Query  632  PIVNMDY------VVGARTF  645
            P+V + Y      +VGA TF
Sbjct  421  PLVKLWYQLEKEPIVGAETF  440
 Score = 16.2 bits (30),  Expect = 5.6, Method: Compositional matrix adjust.
 Identities = 23/105 (22%), Positives = 37/105 (35%), Gaps = 7/105 (7%)
Query  745  AVVQKAAHGHVRKAFKSHVSTLTDLQ--PYMRQFVAHLQETSPLRDAVVIEQSSSLNEAS  802
            A +QK   G     ++ +     +L+   Y R   AH  +   L +AV    + S+    
Sbjct  327  AEIQKQGQGQW--TYQIYQEPFKNLKTGKYARMRGAHTNDVKQLTEAVQKITTESI--VI  382
Query  803  SGLFDVFLRFMCHHAVRIRGKSYVQCQGIPQGSILST-LLCSLCY  846
             G    F   +           Y Q   IP+   ++T  L  L Y
Sbjct  383  WGKTPKFKLPIQKETWETWWTEYWQATWIPEWEFVNTPPLVKLWY  427

Yanping Nora Soong (talk) 11:04, 9 January 2016 (UTC)
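As a sanity check on the numbers BLAST reports, the "Identities" and "Gaps" figures for the first HSP above can be recomputed by hand from the two aligned strings. This is just an illustrative sketch (BLAST's "Positives" count additionally requires a substitution matrix such as BLOSUM62, so it is omitted here):

```python
# Recompute the "Identities" and "Gaps" figures from the first HSP above.
query = "VPLGLPAPGARRRG--------GSASRSLPLPKRPRRGAA"
sbjct = "VQLGIPHPAGLKKKKSVTVLDVGDAYFSVPLDEDFRKYTA"

length = len(query)  # alignment columns: 40
identities = sum(q == s and q != '-' for q, s in zip(query, sbjct))
gaps = query.count('-') + sbjct.count('-')

print(f"Identities = {identities}/{length} ({100*identities//length}%), "
      f"Gaps = {gaps}/{length} ({100*gaps//length}%)")
# → Identities = 12/40 (30%), Gaps = 8/40 (20%)
```

This reproduces the "Identities = 12/40 (30%) ... Gaps = 8/40 (20%)" line in the BLAST output above.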

The sequence isn't really a good way to guess this, but typically, much more closely related sequences are still discriminated between by small molecule therapeutics. This has no more than three amino acids conserved in a row. So the only issue is whether the drug is chemically, mechanistically, so closely tied to the act of reverse transcription that any RTase will be affected -- actually, looking into it on PubMed (you must learn to use this - it gave me the answer right off) I get this study. However, IMHO that study is immoderately worded, in that people vary a lot in telomere lengths with little obvious impact on their health. The whole idea that telomere shortening causes aging is just one of several competing models, and I think it's running well behind right now. That article points out they may lengthen again simply with diet and exercise, and of course, shortening serves as a useful safeguard against cancer. Also note that this study looked at the effect of 123 weeks' average usage, and HIV itself interferes with telomerase and of course otherwise shortens lifespan, so pre-exposure prophylaxis still seems to have a strong rationale. Wnt (talk) 13:15, 9 January 2016 (UTC)
PrEP in NYC is aggressively marketed towards high-risk populations (LGBT homeless youth, sex workers, sexual assault victims with a high risk of revictimization), though recent ads on the subway have begun targeting non-queer people as well. However, impacts on human telomerase reverse transcriptase are almost never discussed. I'm trying to calculate the social utility of such widespread acceptance given the likelihood of HIV infection without PrEP, versus longer-term pro-aging deleterious effects from the PrEP if HIV infection never occurs. After all, the risk of infection from body fluid exchange with an HIV-positive individual is about 1 in 300. (According to the pamphlets they hand out in rape kits.) The literature is ambiguous on the issue, in part because HAART has historically been prescribed to HIV-positive people only. I guess my key question is, "what is the risk of accelerated aging on PrEP / HAART if HIV infection risk remains low?" (i.e. if the opportunity never arises for PrEP to be able to "do its job" and prevent a nascent infection from taking root in the body.) Yanping Nora Soong (talk) 14:34, 9 January 2016 (UTC)
The short answer is that as of now there exists absolutely no evidence that HAART causes premature aging. Ruslik_Zero 20:08, 9 January 2016 (UTC)
Premature and accelerated aging: HIV or HAART?: "We propose here that the premature and accelerated aging of HIV-patients can also be caused by adverse effects of antiretroviral drugs, specifically those that affect the mitochondria. The nucleoside reverse transcriptase inhibitor (NRTI) antiretroviral drug class for instance, is known to cause depletion of mitochondrial DNA via inhibition of the mitochondrial specific DNA polymerase-γ. Besides NRTIs, other antiretroviral drug classes such as protease inhibitors also cause severe mitochondrial damage by increasing oxidative stress and diminishing mitochondrial function."
Yes, they propose but offer no evidence. In fact, even in roundworms it is not proven that HAART actually shortens lifespan. Ruslik_Zero 18:31, 10 January 2016 (UTC)
Expanding on my previous comment: HIV-infected individuals who do not have other risk factors (which they often do have), are properly treated, start the treatment early, and strictly adhere to the regimen may plausibly have the same lifespan as the general population. In other words, the fact that HIV-infected individuals have shortened lifespans is disputed. See, for instance, this and this. Life expectancy has actually grown significantly during the last 20 years and is now approaching that of the uninfected population (excluding some sub-populations like drug users). Ruslik_Zero 18:56, 10 January 2016 (UTC)
Wnt has given a great answer, but just to emphasise, as he has not explicitly said it, your alignments are not very similar at all. Also, as he correctly states, even very similar sequences can be discriminated by small molecules, this is partly because the primary sequence (what you have aligned) is to an extent meaningless. The actual chemistry happens in the 3D folded protein. Similar primary sequences may still have different 3D structures. Fgf10 (talk) 21:45, 9 January 2016 (UTC)
A PatchDock and PyMol assay would take hours to properly evaluate, but I'll do that later. Yanping Nora Soong (talk) 00:40, 10 January 2016 (UTC)
@Fgf10: I'm surprised to hear you say that - I know you are familiar with the field - my impression has been that when primary sequence is conserved, there's usually a reason, which is to say, conserved structure(s). Even the moderate level of homology depicted is sufficient that I would expect that key amino acids from the active site can be extrapolated from one protein to the other.
@Yanping Nora Soong: It's true that you can never be sure in biology, and unless long term study of this specific issue is done in people using the preexposure prophylaxis, we don't know for sure. But there is no rule against guessing, and my guess is that telomeres seem to be in an active equilibrium. As I described above, various factors seem to change their length one way or another. Various tissue stem cells, at least, are free to renew them at any time. So my guess would be that any telomere-shortening effect of the drugs may turn out to be a "kinetic" effect, operating in the relatively short term, while the overall equilibrium of the telomeres is more likely to be unchanged and eventually restore itself. On the other hand, note that in the paper I cited above, a four-week protocol didn't in and of itself change telomere length significantly. The question is whether you need to inhibit telomerase activity for a long time to make telomeres notably shorter, and then they restore themselves anyway if the drug is removed -- but that paper didn't really answer it, because the "off-NRTI" error bars in Figure 3 are atrocious - I mean, there are only so many things you can figure out with under a dozen people, and answering this isn't one of them. Wnt (talk) 02:09, 10 January 2016 (UTC)
@Wnt:, that is certainly true in a lot of proteins, my point was that you can't make any a priori assumptions on 3D structure and binding based on primary structure alone. Fgf10 (talk) 13:05, 10 January 2016 (UTC)
Well, there are sites that encourage such a priori assumptions, such as this one - even, in that case, encouraging extrapolation from catalytically active enzymes to one which seems catalytically inactive (unless I missed something recent). And honestly, I think that is quite likely still a valid extrapolation to make, because at the moment the secondary structure actually changes to some not previously existing, the new structure is not very stable and there is going to be need for a lot of rapid evolution to develop new contacts, while the old ones will no longer be conserved. Wnt (talk) 13:29, 10 January 2016 (UTC)

Which organs of the human body can be regenerated?

92.249.70.153 (talk) 15:13, 9 January 2016 (UTC)

There is no human anatomical organ that can completely regenerate, though advances in Tissue engineering are opening possibilities for organ growth in the laboratory. A laboratory-grown penis was reported in 2008, but the concept has as yet been tested only on rabbits. Wound healing is the process by which skin or other body tissue repairs itself, in which Angiogenesis is the vital process of regeneration; it is, however, also a step in the transition of a tumor from a benign to malignant state, requiring the use of angiogenesis inhibitors in cancer treatment. AllBestFaith (talk) 15:48, 9 January 2016 (UTC)
Unless I'm missing something, you seem to have paraphrased the lead of that article, but presented the material in a way that suggests a full quote, to misleading effect. The original wording is markedly more accurate, if a little un-clinically phrased: "[The] liver is the only visceral organ that possesses remarkable capacity to regenerate." Your alternative ("The liver is the only human internal organ capable of natural regeneration of lost tissue") is just plain wrong; all human organs possess some capacity to regenerate some degree of tissue. This is true, to a very limited extent, even of the brain -- despite its quasi-folkloric reputation for being unable to generate any new cells. Snow 19:40, 9 January 2016 (UTC)
I quoted the first sentence of the liver regeneration section of the liver article exactly, and linked to both main articles. Here's the Hausinger, Dieter, p.1 ref from the liver article.

Your skin regenerates itself every 27 days. Richerman (talk) 17:42, 9 January 2016 (UTC)

el Chapo escape engineering

When el Chapo escaped from prison six months ago, it was through a tunnel constructed from a mile away, a remarkable feat of engineering given that the tunnel had to connect to a small opening in the prison cell. What knowledge and skills did the builders have to have in order to do it? --Halcatalyst (talk) 17:51, 9 January 2016 (UTC)

You're assuming that the prison personnel were honest. ←Baseball Bugs carrots18:01, 9 January 2016 (UTC)
Where has he assumed that? Anyway, to actually address the OP's question, it might help you to know that this is not Guzman/Sinaloa's first impressive tunnel; long before the Altiplano prison break, the cartel was known for constructing kilometers-long tunnels in several towns which served as major hubs in their drug trade. They have been said to have been extremely high-tech, refined and stable. So, while I know of no reports explicitly claiming that individuals with formal engineering backgrounds were involved, I would dare say that it is a reasonable presumption and barely feel as if I was brushing up against WP:OR :).
What I'd like to know is how they figured it out. What knowledge and skills would an honest engineer have to have to accomplish something similar? --Halcatalyst (talk) 20:05, 9 January 2016 (UTC)
In terms of what degree of education would they have to have attained? Or the exact kinds of structural principles they would have to know? If the latter, it's a little outside my wheelhouse, so I'll wait for someone more versed in this area to provide the real detail and sources on the specific skills, but certainly, at a minimum, the designer would have to be familiar with loads, excavation, ventilation, basic electrical engineering (the tunnel was lit), surveying, and I'm sure at least one or two major obvious areas I must be neglecting. Obviously a cut and cover tunnel was not an option in this instance and this restriction, along with the limited size of the entrance and the need for secrecy, suggest a fair amount of manual labor. It's also worth noting that the shaft which connected with Guzman's cell had to enter through the sub-structure of the prison and negotiate its way to the cell, all without arousing suspicion, which would have required an even more heightened degree of planning and coordination. Some photos and dimensions for the tunnel can be found here. Snow 20:36, 9 January 2016 (UTC)
I don't think they would need to know anything about electrical engineering to plug in lots of extension cords and work lights. You can buy all that at any home improvement store. I'm not sure if the outside end of the tunnel was on the power grid, but, if not, some electrical generators, also available at the home improvement store, could power the lights. It might seem surprising they bothered to add lights to a one-time use tunnel, rather than use portable lights. But I suppose the electrical lines were also used to power digging and ventilation equipment, so then they might as well plug in lights, too. StuRat (talk) 23:51, 9 January 2016 (UTC)
Running a generator inside a mile-long tunnel is a bad idea. As is connecting an extension cord to an extension cord. It might not melt with only two, but I'm sure you would need many, many cords for a mile. Sagittarian Milky Way (talk) 23:58, 9 January 2016 (UTC)
Of course you wouldn't put the generator in the tunnel. You would put it at the entrance. StuRat (talk) 04:54, 10 January 2016 (UTC)
Agreed, thousands of feet of extension cord and workshop lamps strikes me as a silly and unlikely notion (unless the contractors were Clark Griswold and Red Green!), for a number of practical reasons and especially in light of the complexity of this operation. It's difficult to make out, but if you look closely at the pictures of the Altiplano tunnel, they seem to employ building wiring, which would be consistent with most every Sinaloa tunnel uncovered so far. Snow 03:01, 10 January 2016 (UTC)
Common 12-gauge home wire has a resistance of about 1.6 ohms per 1000 ft, or 3.2 ohms counting both conductors. By Ohm's law we get a voltage drop of 3.2 volts per amp of current per 1000 ft. There's probably not a huge amount of current being carried. So even with ordinary house wire the voltage drop would be tolerable for running light bulbs or other applications that aren't particularly sensitive to voltage. Using heavier wire would reduce losses further. Shock Brigade Harvester Boris (talk) 04:00, 10 January 2016 (UTC)
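Extending Boris's figures to the full length of the tunnel, here is a worked version of that Ohm's-law estimate. The resistance value is the one quoted above (about 1.6 ohms per 1000 ft per conductor for 12 AWG copper); the load currents are illustrative assumptions:

```python
# Voltage dropped in a mile-long run of 12-gauge wire, per Ohm's law.
R_PER_1000FT = 1.6        # ohms per 1000 ft, one conductor (12 AWG copper)
run_ft = 5280             # one mile
loop_ft = 2 * run_ft      # current flows out and back, so count both conductors

resistance = R_PER_1000FT * loop_ft / 1000   # ≈ 16.9 ohms round trip
for amps in (1, 2, 5):
    drop = amps * resistance                 # Ohm's law: V = I * R
    print(f"{amps} A load: {drop:.1f} V dropped in the wire")
```

At 1 A the drop is about 17 V; a 5 A load would lose roughly 85 V in the wire alone, which is why heavier-gauge wiring (or power distributed at intervals) makes sense for a run this long.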
You can get 12-gauge or thicker extension cords, too, and taking the time to splice wires, etc. in the tunnel seems unwise. Better to use components you can just snap together. StuRat (talk) 04:56, 10 January 2016 (UTC)
With such tunnels the most impressive asset is the willingness to put in the work. The rest seems pretty optional. Mining (military) has gone on since ancient times, and this is a fairly straightforward instance. Ventilating a straight shaft doesn't seem exceptionally complex. Tools required, as at Stalag Luft III, may be rather minimal, depending on terrain. One impressive bit is to come out at the right point, but I presume they would have arranged some help from inside, i.e. for Guzman to make some kind of distinctive noise for them to home in on. Though a tape measure, laser level, and aerial photos of the facility would all help.
What surprised me more was the lack of preparation when Guzman was caught. He had a way out of where he was staying to the sewers, sure ... what he didn't have was a car with a secret compartment waiting to be driven away by loyal men at the far end. He was caught because he had to steal a car to try to get away -- and that, I don't understand. Wnt (talk) 02:35, 10 January 2016 (UTC)
I'm confused, here. I hadn't heard any reports that his eventual capture this last week had anything to do with his stealing a car immediately following the escape; can I ask where you came across this detail? Snow 03:05, 10 January 2016 (UTC)
Here. Wnt (talk) 11:38, 10 January 2016 (UTC)

Opposite sex monozygotic twins without Turner syndrome

Is it theoretically possible to produce monozygotic identical twins of opposite sexes without inherent Turner syndrome in the girl, presumably by correction of natural fertilization or by artificial insemination? Brandmeister 20:16, 9 January 2016 (UTC)

They would not be identical in that case, even if the only difference was that one had an X chromosome and the other had a Y. A transgender process could artificially alter one of a twin, so it is theoretically possible. Also, there could be an error on a birth certificate, so it could be genealogically possible. Graeme Bartlett (talk) 20:27, 9 January 2016 (UTC)
"Error on the birth certificate" sounds like a very cisnormative thing to say to me. Yanping Nora Soong (talk) 00:44, 10 January 2016 (UTC)
In what way? I read Graeme's scenario as saying that two identical children of the same sex were born, but one was mistakenly listed as the other sex. People can construct as many cultural genders as suits them, but there are only two genetic/physiological/developmental sexes (and the rare intersexed individual, of course). Mind you, I think it's an exceedingly odd notion to suggest that we would regard that situation as an example of "dual-sexed identical twins" as opposed to "a clerical error", but I don't see how gender identity comes into this at all. This is a question of the genetics of twins, not gender perception. Snow 03:23, 10 January 2016 (UTC)
There's a good chance I'm Klinefelters (also met one other Klinefelters' individual in my life). Intersex people are not as rare as you think -- they make up about 1% of the population. Of course, it's typical for members of the cishet patriarchy to underestimate the population frequency of queer people... and how would you define each "genetic sex"? Based on one sex having an active SRY gene and the other not? But then there's the issue of epigenetic DNA methylation (due to environmental factors affecting the mothers, or childhood development of the mother) of sequences associated with sexual development, androgen insensitivity syndrome, SNPs which make regulatory sequences less effective at binding estrogen or SRY. Just because the two common chromosomal outcomes are 46,XX or 46,XY doesn't mean expression/regulation/transcription of sex-specific sequences are confined to two individual sharply-contrasting patterns. Yanping Nora Soong (talk) 05:26, 10 January 2016 (UTC)
1) Let's drop the "cishet patriarchy" stick. As a cognitive scientist, I can assure you I am broadly supportive of the notion that transgender identities are an empirically verifiable consequence of human neurophysiology. Even putting my understanding in this area aside, I'd not be inclined towards dismissive bias with regard to self-determination in this area. Nobody is trying to minimize anyone's right to their perceived gender here. 2) Intersex people do not make up 1% of the population (I presume you are speaking of the global population), but even if they did, I would still consider 1% "rare", so that accounts for that apparent (but not actual) difference in perspective. 3) I'm not sure you are entitled to speak for all intersex people in classifying them (on their behalf) as part of queer culture, which is a social construct and not de-facto relevant to their physical make-up. I know you're not the first person to make that assessment, but to me it's an apples and oranges matter. 4) No one said anything about "sharply contrasting"; clearly the genetic make-up of all individuals of our species (male and female) is such that we all share more in common with each other than we do with any other organism. But if you know of any serious research which supports the notion of a third physiological sex in the human race (or any sexual species), I'd be genuinely flush with fascination to hear about it. 5) In any event, I still fail to understand what in Graeme's proposed scenario relates to a transgender issue. He proposed a theoretical in which two identical twins are miscategorized solely on the basis of their physical sex organs, not their later gender identities, which would not be in evidence in a newborn that has neither concrete concepts of gender nor linguistic skills to share them if they did... Snow 05:52, 10 January 2016 (UTC)
The problematic issue is your emphasis on a distinction between a "physiological sex" versus a "socially-constructed gender". But sex itself is also a social construction, and conflicts with gender identity frequently arise because of differences in genetic expression or neural structure compared to the archetypical members of their assigned sex. Would you assign all 47,XXY individuals to the male sex, based on this notion of "natalism" and physiological sex, even for individuals with a genetic transcription pattern and neurophysiological features very unlike 46,XY individuals? Sex is a continuous variable, not a binary one. Yanping Nora Soong (talk) 06:20, 10 January 2016 (UTC)
Sorry, but while I agree with some of the points that you make at the periphery of your main argument, the fact of the matter is that your interpretation of "sex" as a socially constructed phenomenon is pretty idiosyncratic. Sex is clearly a physical term linked to specific phenotypical traits and physiological structures, first and foremost. That's the entire reason the English language began, in the modern world, to reflect a difference between this notion and gender, and they are overwhelmingly used in that manner. And most individuals in our species are generally (and non-controversially) regarded dichotomously in this manner. Clearly you have different ideas of how blurred these lines are for people in general. That's fine, but surely you recognize that the way you use some of these concepts and their associated terminology is non-congruent with the norm in established science in this area? Snow 08:48, 10 January 2016 (UTC)
Btw, no one but members of the cis-patriarchy would use a word like "transgendered," honestly. Yanping Nora Soong (talk) 06:20, 10 January 2016 (UTC)
Really? Drat, I'll just have to tell that to those of the transgender men and women that I know who use the term regularly...they'll want to know they aren't conforming with the newest prescriptivist rules and neologisms, I'm sure. I just hate giving bad news... Snow 08:48, 10 January 2016 (UTC)
"Transgendered" is a slur. Better than some other words, but still an offensive misuse. See this article. Yanping Nora Soong (talk) 09:21, 10 January 2016 (UTC)
What I mean is that "sexing" a baby is inherently a subjective choice, based on the discretion of the sexer. Sex assignment is a phenomenon of oppressive social power -- it is not a scientific process to be "wrong" or "correct" on.
Anyway, the primary sequences of monozygotic twins may be identical but the endocrine exposure pattern, the DNA expression pattern, as well as the epigenetic methylation pattern may not be. In fact, there is a reported case of identical twins where one twin transitions and the other does not. A possible explanation would be that the fetuses were not subject to equal endocrine or signalling conditions in the womb. Yanping Nora Soong (talk) 06:26, 10 January 2016 (UTC)
Eh, sexing is really not that open to interpretation in a clinical environment. There's the occasional intersex newborn, as we've noted, but for unambiguous cases, physicians are not allowed to use their "subjective" assessments to label a child in a manner inconsistent with their sex organs. Certainly, if you feel making note of a child's physical sex (which is deeply important to the manner and quality of medical care they receive) is a "phenomenon of oppressive social power", you and I are not going to see eye to eye on this. It's one thing to accept self-determination in an adult--it's quite another (very silly) thing to say that sex doesn't exist outside of our minds, or to treat it like an offensive notion, or to expect people who have to deal with it as a matter of clinical reality to tiptoe around necessary and practical medical and empirical terminology, lest they be accused of being a part of a system of "patriarchal injustice"... But this is getting to drift far afield from the OP's question, so I will leave my impressions at that, per WP:NOTAFORUM.
As to your point about epigenetics, I find that much less controversial. It's going to be exceedingly rare that one twin has pronounced gender dysphoria and the other doesn't, but it's not altogether outside the realm of possibility and I'm not surprised at all that cases have surfaced, even if the present one is an impressionistic report. Snow 08:48, 10 January 2016 (UTC)
Hmmmm. Isn't sex in a way a social construct when you sit down and class more than one karyotype as being one sex? Biological sex definition is that XX and XY are two different things. But when you say XXY is "more or less XY because of how it looks and acts", that's a bit more of a cultural call. Now of course, there's some biological basis in that XXY and XX often can produce offspring together - nonetheless, there is precedent for delineating multiple fertile or potentially fertile sexes in other species like naked mole rat, so it's not so obvious we have to say XXY and XY are precisely the same sex. In practice, I find it convenient, but I've noticed most methods of classifying people that seem convenient to me eventually seem annoying to someone being classified. :) Wnt (talk) 11:49, 10 January 2016 (UTC)


January 10

Does an atomic explosion send EM waves into outer space?

Does an atomic explosion send EM waves into outer space? Would it reach other planets? --Denidi (talk) 03:26, 10 January 2016 (UTC)

By "EM wave" do you mean Electromagnetic radiation? If so, then yes and yes. The same could be said for radio stations, your cell phone, the light bulbs on your front porch, etc. etc. Shock Brigade Harvester Boris (talk) 03:38, 10 January 2016 (UTC)
My cell phone is sending EM radiation into outer space? --Denidi (talk) 03:43, 10 January 2016 (UTC)
Yes. The amplitude is of course quite small. Shock Brigade Harvester Boris (talk) 03:45, 10 January 2016 (UTC)
Important caveat: not all transmissions will be powerful enough to escape Earth's ionosphere/magnetosphere. Snow 03:50, 10 January 2016 (UTC)
That's my point. Otherwise why would E.T. the Extra-Terrestrial have bothered building a makeshift communicator to phone home? A cell phone would have been enough.--Denidi (talk) 03:53, 10 January 2016 (UTC)
Have you ever had trouble getting a cell phone signal? There's a reason deep space communications use high-gain antennas. You can't fight the inverse-square law. --71.119.131.184 (talk) 04:02, 10 January 2016 (UTC)
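The inverse-square law mentioned here is easy to put in numbers. The sketch below assumes a 2 W isotropic transmitter as a rough stand-in for a cell phone (real phones radiate less, non-isotropically, and real receivers use gain antennas); the 10 km cell radius and the ~5.5×10¹⁰ m closest Earth-Mars distance are the assumed comparison points:

```python
import math

# Free-space power flux from an assumed 2 W isotropic transmitter.
P = 2.0  # watts (illustrative assumption, not a measured phone spec)

def flux(d_m):
    """Power per unit area at distance d_m: the inverse-square law."""
    return P / (4 * math.pi * d_m**2)

tower = 10e3    # 10 km: a generous cell-tower range
mars = 5.5e10   # ~closest Earth-Mars distance, in meters

ratio = flux(tower) / flux(mars)   # equals (mars / tower) ** 2

print(f"flux at 10 km: {flux(tower):.3e} W/m^2")
print(f"flux at Mars:  {flux(mars):.3e} W/m^2")
print(f"ratio: {ratio:.1e}")
```

The signal arriving at Mars is weaker than at the edge of a cell by a factor of about 3×10¹³, which is the point about high-gain antennas for deep-space links.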
Well, bear in mind that cellphones didn't exist when ET visited, and also that there is a difference between whether some amount of the radiation escapes the earth's immediate environs and whether a recoverable signal does. And even if the signal escapes intact, depending on its exact nature, it may be formatted in such a way that it comes across as indecipherable at the other end. Still, you might want to watch what you say, just in case you do unwittingly become our first envoy to a network of super-advanced alien civilizations. :) Snow 04:09, 10 January 2016 (UTC)
Maybe not cellphones in the shape we think of now, but wireless telephones are not new - they've been around for about a century. ←Baseball Bugs carrots04:23, 10 January 2016 (UTC)
That's true, but I'm not sure I see the relevance. I mentioned the detail of the timing of cellphones because they were the technology referenced. Snow 04:32, 10 January 2016 (UTC)
What do you consider to be the distinction between cellphones and wireless phones? ←Baseball Bugs carrots04:34, 10 January 2016 (UTC)
A cell tower, a cell network, different transmitter technology, carrier-frequencies, and signal formats. A wireless phone shares about as much in common with a cellphone as a telecommunications device as either does with a walkie-talkie, really. But I guess I must still be missing your point, because I still don't understand the role you're implying for wireless phones with regard to the OP's question. Snow 05:09, 10 January 2016 (UTC)
The first cellular phone network started operation in Japan in 1979. E.T. the Extra-Terrestrial was released in 1982. It is true that North America, where the film is set, did not have a network in commercial operation until 1983, but hey, E.T.'s from a technologically-superior civilization, so maybe he's an early adopter! --71.119.131.184 (talk) 04:55, 10 January 2016 (UTC)
I stand corrected! I wondered if there were perhaps prototype networks before then; I suppose I ought to have reckoned on a Japanese precursor before '82. (and in any event, should have checked!) Snow 05:09, 10 January 2016 (UTC)
Let's try to bring this space Q back down to Earth: Would a nuclear explosion on or near the surface of the Earth create enough of an EMP to damage the circuits of spaceships? StuRat (talk) 04:51, 10 January 2016 (UTC)
Good idea, Stu. First, though, I'm going to discuss the main issue involved in general terms so the OP "gets" the answer.
Gamma rays are electromagnetic (EM) radiation. They are part of the "prompt radiation" emitted by nuclear weapons immediately on detonation. They are the important component triggering the Electromagnetic Pulse phenomenon, regardless of whether it occurs at or near the Earth's surface or over the ionosphere. This is the EMP phenomenon which grabs headlines for its potential to (according to some commentators) reverse-bias and destroy semiconductor junctions - hence, all manner of integrated circuits in cell phones, the computer you're reading this on, TVs and radios, and the computers and silicon-controlled rectifiers in all modern automobiles.
Now, the 1962 Starfish Prime nuclear weapons test, a 1.4 megaton detonation 250 miles over a point near its launch site, Johnston Island in the Pacific Ocean, may have disabled three satellites in low Earth orbit. Nuclear detonations at or near Earth's surface, however, would have EMP that propagated much closer to the Earth's surface - not into space. OP, please read our article Electromagnetic Pulse to understand the issue more fully. loupgarous (talk) 05:16, 10 January 2016 (UTC)
(EC) Well I'm not sure how that question brings the issue "down to earth" (in either a literal or metaphorical fashion), but the answer is that it would depend on the scale of the explosion and the particulars of the shielding on the hypothetical craft, as well as the nature of the circuitry. It would have to be a sizable explosion in order to exit the stratosphere, but existing armaments could accomplish it, under the right circumstances. (Edit: forgot that Stu specified a detonation on the surface of the planet. Not quite positive of this statement in light of that hypothetical). I'm sure if you dig about, you will find the ISS, by way of example, must have emergency protocols in the event of a nuclear event. I'd bet money on that, but I'm short on time and can't search out the details just now; hopefully someone else can confirm or contradict that assumption. Certainly many militaries have invested in heavily shielded aircraft; it's really not that difficult an issue to address (although, again, everything is relative to the strength of the blast). Snow 05:25, 10 January 2016 (UTC)
Well, it was a factually-based explanation. I don't see anything in your response to show how even a megaton-range detonation at or near the Earth's surface could impact a spacecraft (and in that description I include satellites). The Russian Tsar Bomba 50-megaton weapons test is not recorded to have harmed any satellites, and neither is any other nuclear weapons test outside the STARFISH PRIME shot - certainly no nuclear weapons test inside the troposphere. Of course, if you have hard data - not speculation or hand-waving - showing otherwise, we'd be grateful to you for throwing light on the question. loupgarous (talk) 05:49, 10 January 2016 (UTC)
Uh, I think you may want to re-read my post, because nothing in it was intended to challenge anything you said. Point in fact, I wrote my post without seeing yours (EC="edit conflict") and my post is clearly threaded in response to Stu's inquiry, not your answer. I find your post makes complete sense. As to the "surface" issue, you will note that I already realized that I had misremembered that detail of Stu's hypothetical and struck/corrected my post accordingly. I don't think you and I are saying anything that is at all inconsistent. Snow 06:01, 10 January 2016 (UTC)
Whoops. Sorry, and you're right. Stu's point was well-taken, though; the discussion had wandered off for about 30 lines about cellphones and everything but what the OP asked about. Stu brought the discussion back to that.
While I regret the misunderstanding on my part, may I offer some constructive advice? It's important to focus on what the OP asked. Your additional points are actually interesting. According to this explanation, ordinary solar flare activity is enough of a challenge to satellite designers to encourage them to build in a certain amount of protection against electromagnetic pulse. Part of that protection is shielding, and our article on radiation hardening technology explains what is done along those lines. loupgarous (talk) 06:32, 10 January 2016 (UTC)
I actually view it as broadly important to avoid protracted digressions on the ref desks, to source or wikilink any assertions, to avoid speculation, and generally be as consistent with WP:NOTAFORUM as we would for any other space on the project--or, at the very least, to do so to the extent that the unique role of the ref desks allows. I think I actually have a reputation as a bit of a hard nose in regard to those positions. So can you be specific about where you think my comments have strayed off topic? The cell phone example was raised by Boris, embraced as a line of discussion by the OP and then questioned by Bugs; each of my comments in that line of discussion was a caveat to what someone else had said or an answer to a direct inquiry. As to Stu's question, I personally felt it was a bit of a separate issue, since the OP just wanted to know whether the radiation could be detected in space, not what practical effects it would have on technology. Nevertheless, since Stu's question was a reasonable one in its own right, I just decided to treat it like I would any question that was asked in its own thread and supplied what information I could on the topic. I personally feel I've been as on-topic as any contributor in this thread, but if you feel otherwise I (genuinely and non-passive-aggressively) will take any observations under advisement. The gist of my responses to the OP was meant to clarify that not all radiation escapes back out into space (not immediately, anyway), and the gist of my response to Stu was that we could only answer his question in broad strokes without having specific details for both the blast and the materials involved. Snow 09:09, 10 January 2016 (UTC)
The problem with Stu's point is they seem to be making an assumption the OP meant EMP.

The OP never said anything about EMP, and although the mobile phone discussion may have gotten a little offtopic (particularly the part about whether or not they existed during ET and the distinction between wireless phones and cell phones), it started off from the OP's followup. It's easily possible (actually I think more likely) the OP doesn't care about EMP or potential damage to space ships and is most interested in whether a sufficiently advanced civilisation would be able to detect from a distance when someone has worked out how to generate such explosions. (This is a common trend in science fiction.) Or maybe the OP isn't even thinking of others looking for such explosions in particular, but is making the assumption that a nuclear explosion is the most likely "unintentional transmission" to be detected (which I don't think is correct).

Ultimately we won't know unless the OP clarifies, but there's no reason to assume the OP is particularly interested in EMP or damage caused to space ships or stuff on other planets by nuclear explosions. StuRat may be interested in whether nuclear weapons on a planet may damage spaceships, and there's nothing wrong with asking about it for personal knowledge, but such a question isn't inherently more on topic to the OP's question than whether or not ET could have used cell phones to communicate.

Nil Einne (talk) 09:36, 10 January 2016 (UTC)

I take the question as meaning: could aliens on a planet around another star detect an atomic explosion on Earth? Well, the signal would certainly be strong enough - but compared to the sun I think it would count as noise, whereas a television signal, though much weaker, could be distinguished fairly easily if they had a huge receiver. But then again, if they had receivers spread apart in space, they might be able to separate the earth and the sun by direction and so see that the signal came from the earth rather than the sun. Dmcq (talk) 11:56, 10 January 2016 (UTC)
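Dmcq's "receivers spread apart in space" idea can be put in rough numbers with the diffraction limit θ ≈ λ/D. The sketch below is illustrative only: the 4.2 light-year observer distance (roughly Proxima Centauri) and the 0.5 m VHF television wavelength are assumptions, not figures from this thread.

```python
# Rough diffraction-limit estimate: how long a baseline would alien receivers
# need in order to resolve Earth (1 AU from the Sun) as a separate source?
AU = 1.496e11          # metres, mean Earth-Sun separation
LIGHT_YEAR = 9.461e15  # metres
distance = 4.2 * LIGHT_YEAR  # assumed observer distance (~Proxima Centauri)
wavelength = 0.5             # metres, assumed VHF television wavelength

angle = AU / distance          # angular Earth-Sun separation, radians
baseline = wavelength / angle  # required aperture/baseline, theta ~ lambda/D

print(f"angular separation: {angle:.2e} rad")
print(f"required baseline:  {baseline / 1000:.0f} km")
```

Under those assumptions the required baseline comes out to roughly 130 km - large for a single dish, but modest for an interferometer with receivers "spread apart in space", which is exactly Dmcq's point.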
I think people need a kick here. What we're looking for is a profile of the energy emitted as a function of frequency and time. I came up with this as an example of what I want - it's pretty deficient in most regards but light years ahead of some of the bickering above. I know some folks here have a better impression of what a number like "100 kV/m" means relative to the local radio station, so please, give the rest of us some help. Wnt (talk) 13:39, 10 January 2016 (UTC)
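To give Wnt's "100 kV/m" figure some scale: for a plane wave, power density is S = E²/Z₀ with Z₀ ≈ 377 Ω (the impedance of free space). The comparison field of 1 mV/m below is an assumed ballpark for a broadcast signal some distance from a local radio station, not a figure from this thread.

```python
Z0 = 377.0  # ohms, impedance of free space

def power_density(e_field):
    """Plane-wave power density S = E^2 / Z0, in W/m^2."""
    return e_field ** 2 / Z0

emp = power_density(100e3)   # 100 kV/m, the EMP-grade field quoted above
radio = power_density(1e-3)  # 1 mV/m, assumed distant-broadcast field strength

print(f"EMP field:   {emp:.3g} W/m^2")
print(f"radio field: {radio:.3g} W/m^2")
print(f"ratio:       {emp / radio:.1e}")
```

Because power density goes as the square of the field, the 10⁸ ratio in field strength becomes a 10¹⁶ ratio in power density - which is one way of seeing why an EMP-grade field is in a different regime from ordinary broadcast signals.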
Several people have mentioned specific tests (Tsar Bomba being the largest) - but one should bear in mind that these ground-based experiments were done back in the 1960's when there were very few satellites up there. Subsequent testing went underground, specifically to avoid the effects of the explosion being felt too severely above-ground. So the odds of one of that small number of satellites being damaged or disabled would be tiny even if there were some EMP or other effect involved. These days, space is crammed full of satellites - and even if the odds were relatively small, we might see some effect that was not noticed in the 1960's.
So I think we need to consider theoretical issues rather than anything that was measured at the time.
That said - the reports from Tsar Bomba said that the mushroom cloud went up to 64km high and the heat pulse was felt at ground level 270 km away - window panes were broken 900 km away. Well, the "edge of space" is generally considered to be 100km vertically upwards - so it seems very likely that a low earth orbit satellite that happened to be passing overhead at the time would feel a significant effect...but one in a geostationary orbit would not.
Whether an observer on (say) a planet around a nearby star would be able to detect the increased EM radiation depends on the sensitivity of their instruments and where the earth was with respect to the sun at the time. If they were unable to resolve the sun and the earth as separate points - then even the Tsar Bomba would be the tiniest blip compared to a solar flare. But if they had sufficient magnification to separate out Earth and Sun, and if the bomb went off on the side of the earth facing them - then I'd expect an increase in infrared and visible output to be noticed. Reports of light and heat as intense as the sun from a distance of 270km through the atmosphere suggest that far more than that would penetrate through the clearer and more tenuous atmosphere vertically upwards. Even with a simplistic model of the atmosphere where air is as dense as at ground level all the way up to space (100km) and then vacuum, inverse-square spreading alone would make the intensity of heat and light at the edge of space (270/100)² ≈ 7.3 times larger than what people reported 270km away. The albedo of the earth is 0.3, so the explosion would produce a spot that would certainly be at least 30 times brighter than the normal brightness of reflected sunlight. So I think that with enough magnification, these hypothetical observers would have had a chance at seeing it.
But that's a very speculative answer. Everything depends on the sensitivity and magnification of their instruments - and how lucky they'd be about the timing of the explosion relative to earth's orbit and time of day.
SteveBaker (talk) 14:52, 10 January 2016 (UTC)
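The geometric part of SteveBaker's estimate can be checked directly. Note that because intensity falls with the square of distance, the gain at 100 km relative to 270 km is (270/100)² ≈ 7.3, not the bare distance ratio of 2.7. This sketch deliberately ignores atmospheric absorption, as the simplistic model in the comment does.

```python
# Inverse-square check on the Tsar Bomba brightness estimate above:
# radiant intensity at the "edge of space" (100 km up) compared with
# the reports from observers 270 km away, ignoring absorption.
r_report = 270.0  # km, range at which the heat pulse was reported
r_space = 100.0   # km, conventional "edge of space" altitude

geometric_gain = (r_report / r_space) ** 2
print(f"intensity at {r_space:.0f} km vs {r_report:.0f} km: x{geometric_gain:.1f}")
```

Everything else in the estimate (albedo contrast, observer magnification, timing) stays as speculative as the original comment says it is; this only pins down the inverse-square factor.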

Benoit - binomial authority for Euphrictus squamosus?

Hi, RD/S folks,

I recently started an article about this beastie. (I'm terrified of even little spiders, so the "It is requested that an image or images be included in this article to improve its quality" tag on the talk page, I kinda hope it won't get fulfilled.)

The critter's genus article includes the text (that I paste with attribution): Originally, the species E.squamosus, a species of this genus, was described as Zophopelma squamosa, a Barychelid, by Benoit, in 1965.

Who is M./Mme. Benoit in this context? --Shirt58 (talk) 10:01, 10 January 2016 (UTC)

Pierre L. G. Benoit. Here is his French WP page Pierre L. G. Benoit.--William Thweatt 10:26, 10 January 2016 (UTC)

Sleeping pill effects

When will sleeping pills start showing their side effects, like dizziness and headaches? If anyone takes 2 pills a day, when will side effects start showing up? — Preceding unsigned comment added by 175.101.24.136 (talkcontribs)

Our article on side effects is pretty minimal and links to adverse effect - which is presumably what you're interested in here. As that article points out, these effects may only kick in when you start, stop or change dosage - they may occur randomly in some patients and not others - or (as you suggest) after longer term usage.
These effects depend on the individual, on the drug in question, on what other drugs you are taking - and even on what things you eat (grapefruit, for example, is notorious for inducing side effects by interfering with how many drugs are metabolised).
So I very much doubt there is a definite period/dosage at which this might happen.
It would be easier to make a guess at this amount of time if we knew which specific sleeping pill you were asking about - but to be honest, that would be a violation of our "No Medical Advice" rule and we would not be able to help you. This is a question best asked of your doctor who can look at your specific situation, the drug you're talking about and whatever other drugs or dietary issues you may have. The[REDACTED] reference desk is a VERY bad place to get answers to this kind of question.
SteveBaker (talk) 14:22, 10 January 2016 (UTC)
It depends on whether you're talking about melatonin, antihistamines (like Benadryl or hydroxyzine), antipsychotics with antihistamine activity (like Seroquel or Thorazine), drowsy antipsychotics without antihistamine activity (Latuda), benzodiazepines (like Ativan), or a Z-drug. Yanping Nora Soong (talk) 23:36, 10 January 2016 (UTC)

Wind direction confusion

I always see sat maps showing clouds/wind moving one way and weather reports showing it moving the other way. What's going on here? Thanks. Anna Frodesiak (talk) 01:22, 11 January 2016 (UTC)
