Dataset columns: id (string, length 9-11), category (string, 6 classes), word_count (int64, 62-2.45k), language (string, 1 class), text (string, length 330-16.4k), text_norm (string, length 330-17.5k)
long_en_243
news_en
615
en
A family is grieving after a Lake Orion man died south of Au Train Island Sunday afternoon. Police say the investigation is ongoing, but according to the victim’s brother, the man was on a boating trip to scatter their father’s ashes in Lake Superior. Fifty-nine-year-old Robert Louis of Lake Orion, Michigan, died shortly after he and six other passengers fell out of their 19-foot wooden kit boat near Au Train. The group had been honoring the final wishes of Louis’s father, who died two weeks earlier after years of battling Alzheimer’s. John “Joe” Louis said his brother had devoted his life to caring for their father during the last four years, and that two weeks after their father’s death they divided the ashes, placing half on their mother’s grave and the other half in Lake Superior. Michigan State Police were dispatched to a boating accident Sunday around 6:00 p.m. When troopers arrived, they found six boaters rescued by a nearby pontoon boat and being treated for hypothermia; the seventh passenger was not recovered immediately. A witness on shore told police he saw the boat take a sharp turn for reasons unknown, spilling everyone into the lake. Police say the throttle was still open when everyone fell out. MUNISING, MI — Police have identified the body of a Michigan man found in Lake Superior after a large wave washed seven people off a kit-made wooden boat during a weekend accident. The victim was Robert Louis, 59, of Lake Orion, Michigan State Police said Tuesday. The rescue and recovery effort took place near Pictured Rocks National Lakeshore. "What I gathered from the information was he tried to grab the boat as it was circling around him and the prop hit him, and I'm not sure if it made it so he couldn't swim or if it injured him bad enough to be life threatening," Alger County Sheriff's Department Officer A.J. Schirschmidt said. The Alger County Sheriff's Department and the U.S. Coast Guard searched the area for about 24 hours before the body was found. "We searched for quite a while above water when the helicopter was around," Schirschmidt said. "The Coast Guard got some better information of kind of where they figured the accident occurred so at that time we put two divers in the water." Robert Louis' body was found by the Alger County Sheriff's Department Monday afternoon around 3:00 p.m. south of AuTrain Island. "Just everybody he met was his friend, he was very helpful, kind," Joe Louis said. "The kind of guy that would run into a building on fire to help the people — incredible." Police initially said all those swept overboard in Sunday's accident near AuTrain Island had been on a pontoon boat. On Tuesday, police corrected that information, saying they were aboard a 19-foot wooden boat that had been constructed from a kit. The boat's owner, Tim Preston of Marquette, was on board along with Louis and five others near the island's south side when a large wave hit the boat, pushing everyone overboard. Louis attempted to swim back to the boat but drowned, police said. Authorities searched on Sunday and Monday before the Alger County dive team recovered his body Monday afternoon near the accident site. The other six passengers were rescued Sunday by a group on a pontoon boat that had been in the area and were treated for hypothermia.
A family is grieving after a Lake Orion man died south of Au Train Island Sunday afternoon. Police say the investigation is ongoing, but according to the victim’s brother, the man was on a boating trip to scatter their father’s ashes in Lake Superior. Fifty-nine-year-old Robert Louis of Lake Orion, Michigan, died shortly after he and six other passengers fell out of their nineteen-foot wooden kit boat near Au Train. The group had been honoring the final wishes of Louis’s father, who died two weeks earlier after years of battling Alzheimer’s. John “Joe” Louis said his brother had devoted his life to caring for their father during the last four years, and that two weeks after their father’s death they divided the ashes, placing half on their mother’s grave and the other half in Lake Superior. Michigan State Police were dispatched to a boating accident Sunday around six p.m. When troopers arrived, they found six boaters rescued by a nearby pontoon boat and being treated for hypothermia; the seventh passenger was not recovered immediately. A witness on shore told police he saw the boat take a sharp turn for reasons unknown, spilling everyone into the lake. Police say the throttle was still open when everyone fell out. MUNISING, MI — Police have identified the body of a Michigan man found in Lake Superior after a large wave washed seven people off a kit-made wooden boat during a weekend accident. The victim was Robert Louis, fifty-nine, of Lake Orion, Michigan State Police said Tuesday. The rescue and recovery effort took place near Pictured Rocks National Lakeshore. "What I gathered from the information was he tried to grab the boat as it was circling around him and the prop hit him, and I'm not sure if it made it so he couldn't swim or if it injured him bad enough to be life threatening," Alger County Sheriff's Department Officer A.J. Schirschmidt said. The Alger County Sheriff's Department and the U.S. Coast Guard searched the area for about twenty-four hours before the body was found. "We searched for quite a while above water when the helicopter was around," Schirschmidt said. "The Coast Guard got some better information of kind of where they figured the accident occurred so at that time we put two divers in the water." Robert Louis' body was found by the Alger County Sheriff's Department Monday afternoon around three p.m. south of AuTrain Island. "Just everybody he met was his friend, he was very helpful, kind," Joe Louis said. "The kind of guy that would run into a building on fire to help the people — incredible." Police initially said all those swept overboard in Sunday's accident near AuTrain Island had been on a pontoon boat. On Tuesday, police corrected that information, saying they were aboard a nineteen-foot wooden boat that had been constructed from a kit. The boat's owner, Tim Preston of Marquette, was on board along with Louis and five others near the island's south side when a large wave hit the boat, pushing everyone overboard. Louis attempted to swim back to the boat but drowned, police said. Authorities searched on Sunday and Monday before the Alger County dive team recovered his body Monday afternoon near the accident site. The other six passengers were rescued Sunday by a group on a pontoon boat that had been in the area and were treated for hypothermia.
long_en_203
news_en
725
en
The high resolution of 8K screens means you can stand very close to them without seeing the individual pixels. Sharp has announced plans to sell an 8K television from October. Although several companies have developed "super hi‑vision" resolution test models, this is the first such TV to be made commercially available. The 8K format provides 16 times as many pixels as 1080p high definition, creating an image so detailed it can appear three-dimensional. However, the 85‑inch (2.16 m) device's price of 16 million yen ($133,000; £86,000) is likely to limit sales. Interest is expected to come mainly from broadcasters and companies involved in testing the format. One analyst suggested it would not become a serious proposition for the general public until the turn of the decade. "We're not expecting 8K TVs targeted at consumers to be released until at least 2016, and we don't expect they will cross one million units until after 2019," said Abhi Mallick of IHS Technology. "Japan's NHK is the only broadcaster so far to announce plans to create and broadcast 8K content." He added that the relatively small size of homes in Japan might mean many families would not be interested. Japan is a region where the average size of TVs sold tends to be smaller, and the minimum size at which 8K TVs would likely be sold is 65 inches. He added that, for the time being, manufacturers are expected to focus on convincing families to buy 4K sets instead. 4K TVs offer about four times the number of pixels of 1080p HD sets, but only a quarter as many as 8K TVs, and are being made in sizes of up to about 100 inches to create "cinema-like" home experiences. Due to technological constraints, owners of Sharp's LV-85001 will have to use workarounds to take advantage of its full capabilities. Its built-in TV tuner cannot receive broadcasts in 8K; instead, video must be fed via four separate HDMI cables to handle the required data bandwidth. The resulting image delivers 104 pixels per inch—about a fifth of the density of modern high-end smartphone displays, but enough to read small text or make out intricate details when standing close to the screen. While 8K content and cameras are still rare, the Japanese site AV Watch, which first reported Sharp's TV, suggested the product could be used in hospitals to give keyhole surgeons better imagery. Canon recently revealed it had an 8K video camera in development. Each frame would contain the same detail as a 33.2-megapixel photo. Another expert suggested the tech might appeal to marketers. "The attraction will be for commercial applications - video walls and things like that," said Chris Green, a tech consultant at Davies Murphy Group. "8K screens could offer a very interesting video alternative to today's shop window and billboard displays - which show static advertising - because their extreme clarity means they can show lots of text and would be as readable as a poster." Japan v China: Japan's NHK streamed live 8K footage from last year's World Cup in Brazil and intends to begin public tests of the format over satellite in 2016. The corporation plans to show all of Tokyo's 2020 Olympic Games in 8K and begin regular broadcasts in super hi-vision resolution the same year. However, IHS Technology believes it could be China that drives initial demand for the technology. "China has had the fastest adoption of 4K televisions in the world - there is a demographic there of higher income consumers who want to buy TVs that will impress their neighbours," said Mr Mallick.
"That's often regardless of whether they can accept the content." Many of the early 4K TVs shipped to China lacked HDMI 2.0 ports, so they could not receive 4K content that later became available from set-top boxes. Japanese electronics company Sharp plans to release the world's first commercially available 8K television in October; it will cost 16 million yen (about $133,000). 8K refers to the television's resolution, which is even sharper than 4K (ultra-high definition). 8K delivers 16 times the resolution of full HD. Sharp says the TV will go on sale on October 30 and will be an 85-inch set.
The high resolution of eight K screens means you can stand very close to them without seeing the individual pixels. Sharp has announced plans to sell an eight K television from October. Although several companies have developed "super hi‑vision" resolution test models, this is the first such TV to be made commercially available. The eight K format provides sixteen times as many pixels as one thousand eighty p high definition, creating an image so detailed it can appear three-dimensional. However, the eighty five‑inch (two point one six m) device's price of sixteen million yen (one hundred thirty three thousand dollars; eighty six thousand pounds) is likely to limit sales. Interest is expected to come mainly from broadcasters and companies involved in testing the format. One analyst suggested it would not become a serious proposition for the general public until the turn of the decade. "We're not expecting eight K TVs targeted at consumers to be released until at least two thousand sixteen, and we don't expect they will cross one million units until after two thousand nineteen," said Abhi Mallick of IHS Technology. "Japan's NHK is the only broadcaster so far to announce plans to create and broadcast eight K content." He added that the relatively small size of homes in Japan might mean many families would not be interested. Japan is a region where the average size of TVs sold tends to be smaller, and the minimum size at which eight K TVs would likely be sold is sixty five inches. He added that, for the time being, manufacturers are expected to focus on convincing families to buy four K sets instead. Four K TVs offer about four times the number of pixels of one thousand eighty p HD sets, but only a quarter as many as eight K TVs, and are being made in sizes of up to about one hundred inches to create "cinema-like" home experiences. Due to technological constraints, owners of Sharp's LV-85001 will have to use workarounds to take advantage of its full capabilities. Its built-in TV tuner cannot receive broadcasts in eight K; instead, video must be fed via four separate HDMI cables to handle the required data bandwidth. The resulting image delivers one hundred four pixels per inch—about a fifth of the density of modern high-end smartphone displays, but enough to read small text or make out intricate details when standing close to the screen. While eight K content and cameras are still rare, the Japanese site AV Watch, which first reported Sharp's TV, suggested the product could be used in hospitals to give keyhole surgeons better imagery. Canon recently revealed it had an eight K video camera in development. Each frame would contain the same detail as a thirty-three point two-megapixel photo. Another expert suggested the tech might appeal to marketers. "The attraction will be for commercial applications - video walls and things like that," said Chris Green, a tech consultant at Davies Murphy Group. "Eight K screens could offer a very interesting video alternative to today's shop window and billboard displays - which show static advertising - because their extreme clarity means they can show lots of text and would be as readable as a poster." Japan v China: Japan's NHK streamed live eight K footage from last year's World Cup in Brazil and intends to begin public tests of the format over satellite in two thousand sixteen. The corporation plans to show all of Tokyo's two thousand twenty Olympic Games in eight K and begin regular broadcasts in super hi-vision resolution the same year.
However, IHS Technology believes it could be China that drives initial demand for the technology. "China has had the fastest adoption of four K televisions in the world - there is a demographic there of higher income consumers who want to buy TVs that will impress their neighbours," said Mr Mallick. "That's often regardless of whether they can accept the content." Many of the early four K TVs shipped to China lacked HDMI two point zero ports, so they could not receive four K content that later became available from set-top boxes. Japanese electronics company Sharp plans to release the world's first commercially available eight K television in October; it will cost sixteen million yen (about one hundred thirty three thousand dollars). Eight K refers to the television's resolution, which is even sharper than four K (ultra-high definition). Eight K delivers sixteen times the resolution of full HD. Sharp says the TV will go on sale on October thirty and will be an eighty five-inch set.
long_en_333
poet_en
125
en
They had just created something they could not control. Now it was coming after their entire bloodline, and it would not stop until the Wells were extinct. He could not contain his anger; he slammed his fist against the window, cracking the glass. His plan was to put a huge distance between Alyson and the United States. He wished she had accepted; then everything would have been all right. Considering his options, he had only one that would work. He really did not want to do it; he didn't want to owe anyone a favor, especially not him. He started dialing a number on his phone. As soon as the person on the other end picked up, he said, "Klyde, I need a favor..."
They had just created something they could not control. Now it was coming after their entire bloodline, and it would not stop until the Wells were extinct. He could not contain his anger; he slammed his fist against the window, cracking the glass. His plan was to put a huge distance between Alyson and the United States. He wished she had accepted; then everything would have been all right. Considering his options, he had only one that would work. He really did not want to do it; he didn't want to owe anyone a favor, especially not him. He started dialing a number on his phone. As soon as the person on the other end picked up, he said, "Klyde, I need a favor..."
long_en_223
news_en
772
en
The United States and South Korea agreed to revise their six-year-old trade pact with a side deal to deter competitive currency devaluation by Seoul and with concessions for U.S. autos and pharmaceutical companies, Trump administration officials said on Tuesday. They told reporters that the deal includes provisions outlined by South Korean officials on Monday, including a 20-year extension of the 25 percent U.S. tariff on pickup trucks and a doubling of the Korean import cap on autos that meet U.S. specifications to 50,000 per manufacturer per year. The agreement, cobbled together quickly with only a few rounds of negotiations under Trump's threat of withdrawal, will include a side letter that requires South Korea to provide increased transparency of its foreign exchange interventions, with commitments to avoid won devaluations for competitive purposes. The currency deal, the final details of which are still being negotiated between the U.S. Treasury and South Korea's Ministry of Strategy and Finance, is considered a side letter that will not be enforceable with trade sanctions. Many U.S. lawmakers, particularly Democrats, had opposed the 2015 Trans-Pacific Partnership trade deal because it had a similar currency manipulation side agreement that could not be enforced. Nonetheless, U.S. officials said the revised U.S.-South Korean Free Trade Agreement, known as KORUS, would go into force with a currency side agreement and would not require congressional approval. When asked whether the forex agreement would ensure South Korea is not labeled a manipulator in the U.S. Treasury’s next currency report, due April 15, officials declined to comment, saying such assessments will be made according to Treasury procedures. Improving foreign exchange policy transparency would mean disclosing how much money is spent to curb currency market volatility, a South Korean finance ministry official said. “It’s about time” South Korea disclosed FX intervention details, the official added, speaking on condition of anonymity because of the issue’s sensitivity. South Korea’s central bank is often suspected of buying or selling dollars to stem sharp fluctuations in the won, but there are no official figures on how much is spent on such “smoothing operations.” While Seoul says interventions are limited to smoothing operations, the practice has drawn criticism that it may devalue the currency to boost exporters’ competitiveness. Earlier in March, South Korean government officials said plans to disclose details of currency market intervention are under review. South Korea has been kept on a Treasury currency "monitoring list" because of its large global current account surplus and U.S. trade surplus. The Treasury’s October report urged Korea to enhance the transparency of its exchange-rate intervention. Officials confirmed that South Korea agreed to cut its steel exports to the United States by about 30 percent, allowing the rest to be excluded from steel tariffs. Korean aluminum producers remain subject to a 10 percent U.S. tariff. Other countries must agree to similar quotas to escape tariffs, but the size of the limits will vary. The United States is negotiating with Canada, Mexico, Brazil, the European Union, Australia, and Argentina. One official said it was "not a one-size-fits-all" approach and added that the South Korean quota was agreed because of its "unique" position in steel exports.
South Korea imports and processes significant amounts of Chinese-made steel, much of which is under anti-dumping and anti-subsidy tariffs. “What will be the same for any country that is out from under the Section 232 tariff, as with Korea, is that there will be a hard quota,” the official added. South Korea also agreed to amend a government health program that pays premium prices to domestic drug companies to ensure a level playing field for U.S. pharmaceutical producers, U.S. officials said. In addition to increased access for American vehicles that meet U.S.—but not necessarily South Korean—safety standards, U.S. officials said they secured reductions in non-tariff barriers to U.S. vehicle sales, including elimination of duplicate environmental testing requirements and recognition of U.S. replacement-parts standards. Officials said extending the phase-out period for U.S. pickup truck tariffs to 2041 would help ensure production does not migrate to South Korea, as some production moved to Mexico after the North American Free Trade Agreement eliminated the tariff.
The United States and South Korea agreed to revise their six-year-old trade pact with a side deal to deter competitive currency devaluation by Seoul and with concessions for U.S. autos and pharmaceutical companies, Trump administration officials said on Tuesday. They told reporters that the deal includes provisions outlined by South Korean officials on Monday, including a twenty-year extension of the twenty-five percent U.S. tariff on pickup trucks and a doubling of the Korean import cap on autos that meet U.S. specifications to fifty thousand per manufacturer per year. The agreement, cobbled together quickly with only a few rounds of negotiations under Trump's threat of withdrawal, will include a side letter that requires South Korea to provide increased transparency of its foreign exchange interventions, with commitments to avoid won devaluations for competitive purposes. The currency deal, the final details of which are still being negotiated between the U.S. Treasury and South Korea's Ministry of Strategy and Finance, is considered a side letter that will not be enforceable with trade sanctions. Many U.S. lawmakers, particularly Democrats, had opposed the two thousand fifteen Trans-Pacific Partnership trade deal because it had a similar currency manipulation side agreement that could not be enforced. Nonetheless, U.S. officials said the revised U.S.-South Korean Free Trade Agreement, known as KORUS, would go into force with a currency side agreement and would not require congressional approval. When asked whether the forex agreement would ensure South Korea is not labeled a manipulator in the U.S. Treasury’s next currency report, due April fifteen, officials declined to comment, saying such assessments will be made according to Treasury procedures. Improving foreign exchange policy transparency would mean disclosing how much money is spent to curb currency market volatility, a South Korean finance ministry official said. “It’s about time” South Korea disclosed FX intervention details, the official added, speaking on condition of anonymity because of the issue’s sensitivity. South Korea’s central bank is often suspected of buying or selling dollars to stem sharp fluctuations in the won, but there are no official figures on how much is spent on such “smoothing operations.” While Seoul says interventions are limited to smoothing operations, the practice has drawn criticism that it may devalue the currency to boost exporters’ competitiveness. Earlier in March, South Korean government officials said plans to disclose details of currency market intervention are under review. South Korea has been kept on a Treasury currency "monitoring list" because of its large global current account surplus and U.S. trade surplus. The Treasury’s October report urged Korea to enhance the transparency of its exchange-rate intervention. Officials confirmed that South Korea agreed to cut its steel exports to the United States by about thirty percent, allowing the rest to be excluded from steel tariffs. Korean aluminum producers remain subject to a ten percent U.S. tariff. Other countries must agree to similar quotas to escape tariffs, but the size of the limits will vary. The United States is negotiating with Canada, Mexico, Brazil, the European Union, Australia, and Argentina. One official said it was "not a one-size-fits-all" approach and added that the South Korean quota was agreed because of its "unique" position in steel exports.
South Korea imports and processes significant amounts of Chinese-made steel, much of which is under anti-dumping and anti-subsidy tariffs. “What will be the same for any country that is out from under the Section two hundred thirty-two tariff, as with Korea, is that there will be a hard quota,” the official added. South Korea also agreed to amend a government health program that pays premium prices to domestic drug companies to ensure a level playing field for U.S. pharmaceutical producers, U.S. officials said. In addition to increased access for American vehicles that meet U.S.—but not necessarily South Korean—safety standards, U.S. officials said they secured reductions in non-tariff barriers to U.S. vehicle sales, including elimination of duplicate environmental testing requirements and recognition of U.S. replacement-parts standards. Officials said extending the phase-out period for U.S. pickup truck tariffs to two thousand forty-one would help ensure production does not migrate to South Korea, as some production moved to Mexico after the North American Free Trade Agreement eliminated the tariff.
long_en_237
news_en
835
en
In the United States, where about 4 percent of food imports come from Japan, the Food and Drug Administration has restricted some foods from that country. The agency is working with customs officials to screen incoming fish and other food for traces of radiation. So far that screening has identified seven items that required further testing to see if the detected radiation exceeded normal background levels, according to Siobhan Delancey, an FDA spokeswoman. Those items included tea and flavoring compounds. She said three of the items had been cleared for delivery and four were awaiting test results. Patricia A. Hansen, a senior scientist at the FDA, acknowledged that the radiation detection methods used to screen food imports were not sensitive enough to detect a single contaminated fish in a large shipment, but said small amounts of contamination did not represent a public health hazard. A person would have to consume large amounts of fish in excess of what is known as an "intervention level," or threshold level, of radiation for an extended period before it would be considered dangerous. "One fish that might be at an intervention level in a huge cargo container, we're not going to pick that up," she said. “But the important context is whether one fish at the intervention level is a public health concern. No, it is not.” Nicholas Fisher, a professor of marine sciences at the State University of New York at Stony Brook, said that, according to some radiation safety guidelines, people could safely eat 35 pounds of fish each year containing the level of cesium-137 detected in the Japanese fish. “You’re not going to die from eating it right away,” he said, “but we’re getting to levels where I would think twice about eating it.” All the talk about radioactive food in Japan, which earlier banned milk and other farm products from areas near the crippled plant, has made some people uneasy, even thousands of miles away. “When radioactive material started going into the ocean, that raised my concern greatly,” Karen Werner, 68, said as she shopped for fish at 99 Ranch Market in Richmond, California. “Right now, I’m not too worried about it showing up in fish, but I’m keeping my eye on it.” Lee Nakamura, a partner who manages the fish counter at Tokyo Fish Market in Berkeley, California, estimated that one in five customers asked about possible radiation, but he had not yet seen an impact on sales. He said his Japanese suppliers had assured him that fish were being tested for possible radiation. “Everything is under a microscope right now,” Mr. Nakamura said. “I feel confident the fish is safe. Everyone in Japan and here is looking at it and double-checking it before it gets to us.” Several restaurant owners and fish importers said that while they continued to buy some fish from Japan, it came from areas far from the reactor site. Still, Scott Rosenberg, an owner of Sushi Yasuda, a highly regarded sushi restaurant in Manhattan, said he planned to buy a radiation detector and would post a notice on the restaurant’s website to let customers know about the testing. “We want to make sure there is no exposure,” he said. Other segments of the food industry are also grappling with how to respond to radiation concerns. Sensitive monitoring devices and tests have detected trace amounts of radioactive material from Japan in the air and water in many states. Tests in Arizona, California and Washington State have found minuscule amounts in milk, leading to concern among dairy farmers.
Everything detected has been well below levels considered dangerous, but food companies realize that consumers may still need to be reassured. In California, Will Daniels, senior vice president for food safety at Earthbound Farm, a major producer of organic salad greens, said the company was prepared to test soil and greens for radiation if concerns persisted or fallout from Japan intensified. "The likelihood of contamination on the West Coast is extremely low, so it’s really important that we’re monitoring appropriately and not creating panic," Daniels said. "But we certainly need to make sure we’re doing the appropriate thing and are ready to respond." Cliff Coles, a consultant who works with Earthbound and other produce companies and food-ingredient importers on food-safety issues, said he had ordered two radiation detectors and was planning to take them into fields where greens, tomatoes and peppers would be grown this spring. He said he would work with Earthbound’s growers to make sure the fish-emulsion fertilizer they use was tested for radiation. "We’re just trying to get our clients to be proactive and say that, while this may not be the end-all solution, let’s take a look at what’s going on around us before we get blindsided," Coles said. Consumer worries about radiation have led to a big boom in sales of one food that often comes from Japan: seaweed.
In the United States, where about four percent of food imports come from Japan, the Food and Drug Administration has restricted some foods from that country. The agency is working with customs officials to screen incoming fish and other food for traces of radiation. So far that screening has identified seven items that required further testing to see if the detected radiation exceeded normal background levels, according to Siobhan Delancey, an FDA spokeswoman. Those items included tea and flavoring compounds. She said three of the items had been cleared for delivery and four were awaiting test results. Patricia A. Hansen, a senior scientist at the FDA, acknowledged that the radiation detection methods used to screen food imports were not sensitive enough to detect a single contaminated fish in a large shipment, but said small amounts of contamination did not represent a public health hazard. A person would have to consume large amounts of fish in excess of what is known as an "intervention level," or threshold level, of radiation for an extended period before it would be considered dangerous. "One fish that might be at an intervention level in a huge cargo container, we're not going to pick that up," she said. “But the important context is whether one fish at the intervention level is a public health concern. No, it is not.” Nicholas Fisher, a professor of marine sciences at the State University of New York at Stony Brook, said that, according to some radiation safety guidelines, people could safely eat thirty five pounds of fish each year containing the level of cesium one hundred thirty seven detected in the Japanese fish. “You’re not going to die from eating it right away,” he said, “but we’re getting to levels where I would think twice about eating it.” All the talk about radioactive food in Japan, which earlier banned milk and other farm products from areas near the crippled plant, has made some people uneasy, even thousands of miles away. “When radioactive material started going into the ocean, that raised my concern greatly,” Karen Werner, sixty eight, said as she shopped for fish at 99 Ranch Market in Richmond, California. “Right now, I’m not too worried about it showing up in fish, but I’m keeping my eye on it.” Lee Nakamura, a partner who manages the fish counter at Tokyo Fish Market in Berkeley, California, estimated that one in five customers asked about possible radiation, but he had not yet seen an impact on sales. He said his Japanese suppliers had assured him that fish were being tested for possible radiation. “Everything is under a microscope right now,” Mr. Nakamura said. “I feel confident the fish is safe. Everyone in Japan and here is looking at it and double-checking it before it gets to us.” Several restaurant owners and fish importers said that while they continued to buy some fish from Japan, it came from areas far from the reactor site. Still, Scott Rosenberg, an owner of Sushi Yasuda, a highly regarded sushi restaurant in Manhattan, said he planned to buy a radiation detector and would post a notice on the restaurant’s website to let customers know about the testing. “We want to make sure there is no exposure,” he said. Other segments of the food industry are also grappling with how to respond to radiation concerns. Sensitive monitoring devices and tests have detected trace amounts of radioactive material from Japan in the air and water in many states.
Tests in Arizona, California and Washington State have found minuscule amounts in milk, leading to concern among dairy farmers. Everything detected has been well below levels considered dangerous, but food companies realize that consumers may still need to be reassured. In California, Will Daniels, senior vice president for food safety at Earthbound Farm, a major producer of organic salad greens, said the company was prepared to test soil and greens for radiation if concerns persisted or fallout from Japan intensified. "The likelihood of contamination on the West Coast is extremely low, so it’s really important that we’re monitoring appropriately and not creating panic," Daniels said. "But we certainly need to make sure we’re doing the appropriate thing and are ready to respond." Cliff Coles, a consultant who works with Earthbound and other produce companies and food-ingredient importers on food-safety issues, said he had ordered two radiation detectors and was planning to take them into fields where greens, tomatoes and peppers would be grown this spring. He said he would work with Earthbound’s growers to make sure the fish-emulsion fertilizer they use was tested for radiation. "We’re just trying to get our clients to be proactive and say that, while this may not be the end-all solution, let’s take a look at what’s going on around us before we get blindsided," Coles said. Consumer worries about radiation have led to a big boom in sales of one food that often comes from Japan: seaweed.
long_en_309
wiki_en
191
en
This study examines fundamental computer algorithms, which are the basis of computer programs. Without algorithms, no programs would exist. It involves studying the mathematical functions behind computational algorithms, basic theory, and both functional and low-level programming. In an academic setting, the area introduces the fundamental mathematical theorems and functions of theoretical computer science, which are the building blocks for other areas of the field. Complex topics such as proofs, algebraic functions, and set theory are introduced in CIS studies. Information and computer science is a rapidly developing field with strong job prospects: 75.7% of graduates gain employment. The IT industry employs one in twenty workers in the UK, and its growth is predicted to be nearly five times faster than the national average. Between 2012 and 2017, more than half a million people were projected to be needed in the industry. Nine out of ten tech firms report candidate shortages, which negatively impact their business by delaying new product development. In the US, it is predicted that, over the next decade, there will be more than one million more technology-sector jobs than computer science graduates to fill them.
This study examines fundamental computer algorithms, which are the basis of computer programs. Without algorithms, no programs would exist. It involves studying the mathematical functions behind computational algorithms, basic theory, and both functional and low-level programming. In an academic setting, the area introduces the fundamental mathematical theorems and functions of theoretical computer science, which are the building blocks for other areas of the field. Complex topics such as proofs, algebraic functions, and set theory are introduced in CIS studies. Information and computer science is a rapidly developing field with strong job prospects: seventy five point seven percent of graduates gain employment. The IT industry employs one in twenty workers in the UK, and its growth is predicted to be nearly five times faster than the national average. Between two thousand twelve and two thousand seventeen, more than half a million people were projected to be needed in the industry. Nine out of ten tech firms report candidate shortages, which negatively impact their business by delaying new product development. In the US, it is predicted that, over the next decade, there will be more than one million more technology-sector jobs than computer science graduates to fill them.
long_en_164
paper_en
533
en
LLaVA also demonstrates impressive OCR (optical character recognition) ability, which is rarely covered in our training data. We hope these additional results and observations showcase the potential of LLaVA in various application areas. In future work, it is important to investigate these emergent behaviors more thoroughly and to understand the underlying mechanisms that enable LLaVA to demonstrate such generalization abilities. This will pave the way towards building better LMMs, including enhancing robustness, reducing biases, and improving the alignment and the scope of the learned vision-language representations. Training Details: We pre-train our model on the filtered CC-595K subset for 1 epoch with a learning rate of 2e-3 and a batch size of 128, and fine-tune on the proposed LLaVA-Instruct-158K dataset for 3 epochs, with a learning rate of 2e-5 and a batch size of 32. Following Vicuna, we use the Adam optimizer with no weight decay and a cosine learning rate with a warmup ratio of 3%. During finetuning, FSDP (Fully Sharded Data Parallel) and gradient checkpointing are used to save GPU memory, and offloading is not used. BF16 and TF32 are enabled to achieve a balance between speed and precision. We train all models with 8 A100s. Pretraining on CC-595K completes within 4 hours. Finetuning on Instruct-158K completes within 10 hours. Finetuning on ScienceQA completes within 4 hours. Data: Instructions for brief image description. The list of instructions used to briefly describe the image content presents the same meaning with natural language variance. Instructions for detailed image description. The list of instructions used to describe the image content in detail presents the same meaning with natural language variance. CC3M. We extract noun-phrases using Spacy for each caption over the whole CC3M dataset, and count the frequency of each unique noun-phrase. We skip noun-phrases whose frequency is smaller than 3, as they are usually rare combinations of concepts and attributes that have already been covered by other captions. We then start from the noun-phrases with the lowest remaining frequency, and add the captions that contain this noun-phrase to the candidate pool. If the frequency of the noun-phrase is larger than 100, we randomly choose a subset of size 100 out of all its captions. This results in around 595K image-text pairs. The filtered dataset shows a good coverage of concepts whose frequency is higher than 3, but with a smaller number of image-text pairs. Prompts: The prompt used to generate image-based conversation from ChatGPT/GPT-4 is constructed using few-shot in-context-learning. The system message instructs the model to act as an AI visual assistant seeing a single image, described by five sentences. It is asked to design a diverse conversation about the photo, including questions about visual content like object types, counts, actions, and locations. It is also asked to include complex questions about background knowledge or events in the image, providing detailed, well-organized answers. The prompt emphasizes asking only questions that can be answered confidently from the image. Following the system message, a series of few-shot examples (user context and assistant response) are provided before the final user query.
LLaVA also demonstrates impressive OCR (optical character recognition) ability, which is rarely covered in our training data. We hope these additional results and observations showcase the potential of LLaVA in various application areas. In future work, it is important to investigate these emergent behaviors more thoroughly and to understand the underlying mechanisms that enable LLaVA to demonstrate such generalization abilities. This will pave the way towards building better LMMs, including enhancing robustness, reducing biases, and improving the alignment and the scope of the learned vision-language representations. Training Details: We pre-train our model on the filtered CC five hundred ninety five K subset for one epoch with a learning rate of two e minus three and a batch size of one hundred twenty eight, and fine-tune on the proposed LLaVA-Instruct one hundred fifty eight K dataset for three epochs, with a learning rate of two e minus five and a batch size of thirty two. Following Vicuna, we use the Adam optimizer with no weight decay and a cosine learning rate with a warmup ratio of three percent. During finetuning, FSDP (Fully Sharded Data Parallel) and gradient checkpointing are used to save GPU memory, and offloading is not used. BF one six and TF three two are enabled to achieve a balance between speed and precision. We train all models with eight A one hundreds. Pretraining on CC five hundred ninety five K completes within four hours. Finetuning on Instruct one hundred fifty eight K completes within ten hours. Finetuning on ScienceQA completes within four hours. Data: Instructions for brief image description. The list of instructions used to briefly describe the image content presents the same meaning with natural language variance. Instructions for detailed image description. The list of instructions used to describe the image content in detail presents the same meaning with natural language variance. CC3M. We extract noun-phrases using Spacy for each caption over the whole CC3M dataset, and count the frequency of each unique noun-phrase. We skip noun-phrases whose frequency is smaller than three, as they are usually rare combinations of concepts and attributes that have already been covered by other captions. We then start from the noun-phrases with the lowest remaining frequency, and add the captions that contain this noun-phrase to the candidate pool. If the frequency of the noun-phrase is larger than one hundred, we randomly choose a subset of size one hundred out of all its captions. This results in around five hundred ninety-five thousand image-text pairs. The filtered dataset shows a good coverage of concepts whose frequency is higher than three, but with a smaller number of image-text pairs. Prompts: The prompt used to generate image-based conversation from ChatGPT/GPT-four is constructed using few-shot in-context-learning. The system message instructs the model to act as an AI visual assistant seeing a single image, described by five sentences. It is asked to design a diverse conversation about the photo, including questions about visual content like object types, counts, actions, and locations. It is also asked to include complex questions about background knowledge or events in the image, providing detailed, well-organized answers. The prompt emphasizes asking only questions that can be answered confidently from the image. Following the system message, a series of few-shot examples (user context and assistant response) are provided before the final user query.
long_en_124
paper_en
1,042
en
We also provide a unified paradigm to understand different representative training methods. Within this paradigm, all methods are conceptualized as either direct or simplified RL techniques. As summarized, there exist three key components: Data Source, Algorithm, and Reward Function. We provide some potential future directions about the three components. For the Data Source, which is the raw material of all training methods, we think this is a potential reason that our RL pipeline only improves the Maj@K performance. In the future, we will explore our RL pipeline on out-of-distribution question prompts, in conjunction with advanced sampling (decoding) strategies, like those based on tree-search methods. Efficient inference techniques, which determine the exploration efficiency of policy models, also play an exceedingly important role. For Algorithms, which process the data and reward signal into the gradient coefficient, to some extent, all methods now fully trust the signal of the reward function to increase or decrease the conditional probability of a certain token. However, it is impossible to ensure the reward signal is always reliable, especially in extremely complex tasks. For example, even the PRM800K datasets, which have been carefully annotated by well-trained annotators, still contain approximately 20% incorrect annotations. To this end, we will explore the reinforcement learning algorithm that is robust against noisy reward signals. We believe such weak-to-strong alignment methods will bring a fundamental change to the learning algorithms. For the Reward Function, which is the source of the training signal, we think there exist three important directions for reward models: 1) How to enhance the generalization ability of the reward model. The reward model must be effectively generalized to handle out-of-distribution questions and advanced decoding outputs; otherwise, reinforcement learning may merely stabilize the distribution of LLMs rather than improve their fundamental capabilities. 2) How to reflect the uncertainty of the reward model. The uncertainty could potentially act as a linking bridge between the weak reward model and the weak-to-strong learning algorithms. 3) How to efficiently build high-quality process reward models that can provide fine-grained training signals for the reasoning process. Conclusion, Limitation, and Future Work: We present DeepSeekMath, which outperforms all open-source models on the competition-level MATH benchmark and approaches the performance of closed models. DeepSeekMath is initialized with DeepSeek-Coder-v1.5 7B and undergoes continual training for 500B tokens, with a significant component of the training data being 120B math tokens sourced from Common Crawl. Our extensive ablation study shows web pages offer significant potential for high-quality mathematical data, while arXiv may not be as beneficial as we expected. We introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), which can notably improve mathematical reasoning capabilities with less memory consumption. The experimental results show that GRPO is effective even if DeepSeekMath-Instruct 7B has reached a high score on benchmarks. We also provide a unified paradigm to understand a series of methods and summarize several potential directions for more effective reinforcement learning.
Although DeepSeekMath achieves impressive scores on quantitative reasoning benchmarks, its capabilities in geometry and theorem proving are relatively weaker than those of closed models. For instance, in our dry run, the model cannot handle problems related to triangles and ellipses, which may indicate data selection bias in pre-training and fine-tuning. In addition, restricted by the model scale, DeepSeekMath is worse than GPT-4 on few-shot capability. GPT-4 could improve its performance with few-shot inputs, while DeepSeekMath shows similar performance in zero-shot and few-shot evaluation. In the future, we will further improve our engineered data selection pipeline to construct a higher-quality pre-training corpus. In addition, we will explore the potential directions for more effective reinforcement learning of LLMs. Appendix: Analysis of Reinforcement Learning. We provide the detailed derivation of the data source and gradient coefficient (algorithm and reward function) across various methods, including SFT, RFT, Online RFT, DPO, PPO, and GRPO. Supervised Fine-tuning: The objective of Supervised Fine-tuning is maximizing the log probability of the target sequence. The Data Source is the dataset employed for SFT. The Reward Function can be regarded as human selection. The Gradient Coefficient is always set to 1. Rejection Sampling Fine-tuning: Rejection Sampling Fine-tuning first samples multiple outputs from the supervised fine-tuned LLMs for each question, and then trains LLMs on the sampled outputs with the correct answer. The Data Source is a question in the SFT dataset with outputs sampled from the SFT model. The Reward Function is a Rule (whether the answer is correct or not). The Gradient Coefficient is 1 if the answer is correct, and 0 if it is incorrect. Online Rejection Sampling Fine-tuning: The only difference between RFT and Online RFT is that the outputs of Online RFT are sampled from the real-time policy model, rather than from the SFT model. Direct Preference Optimization (DPO): The objective of DPO is to optimize based on preferences between pairs of outputs. The Data Source is a question in the SFT dataset with outputs sampled from the SFT model. The Reward Function is human preference in the general domain (or can be a 'Rule' in mathematical tasks). The Gradient Coefficient is a function of the log probabilities of the preferred and rejected responses under the policy and reference models. Proximal Policy Optimization (PPO): The objective of PPO is to maximize a clipped surrogate objective involving the ratio of probabilities between the new and old policies, weighted by the advantage function. The Data Source is a question in the SFT dataset with outputs sampled from the policy model. The Reward Function is a reward model. The Gradient Coefficient is the advantage, which is computed by applying Generalized Advantage Estimation (GAE), based on the rewards and a learned value function. Group Relative Policy Optimization (GRPO): The objective of GRPO is to maximize an objective based on group-relative advantages. The Data Source is a question in the SFT dataset with outputs sampled from the policy model. The Reward Function is a reward model. The Gradient Coefficient is the group-relative advantage plus a KL-divergence term for regularization, where the advantage is computed based on the group reward scores.
We also provide a unified paradigm to understand different representative training methods. Within this paradigm, all methods are conceptualized as either direct or simplified RL techniques. As summarized, there exist three key components: Data Source, Algorithm, and Reward Function. We provide some potential future directions about the three components. For the Data Source, which is the raw material of all training methods, we think this is a potential reason that our RL pipeline only improves the Maj at K performance. In the future, we will explore our RL pipeline on out-of-distribution question prompts, in conjunction with advanced sampling (decoding) strategies, like those based on tree-search methods. Efficient inference techniques, which determine the exploration efficiency of policy models, also play an exceedingly important role. For Algorithms, which process the data and reward signal into the gradient coefficient, to some extent, all methods now fully trust the signal of the reward function to increase or decrease the conditional probability of a certain token. However, it is impossible to ensure the reward signal is always reliable, especially in extremely complex tasks. For example, even the PRM eight hundred K datasets, which have been carefully annotated by well-trained annotators, still contain approximately twenty percent incorrect annotations. To this end, we will explore the reinforcement learning algorithm that is robust against noisy reward signals. We believe such weak-to-strong alignment methods will bring a fundamental change to the learning algorithms. For the Reward Function, which is the source of the training signal, we think there exist three important directions for reward models: one) How to enhance the generalization ability of the reward model. The reward model must be effectively generalized to handle out-of-distribution questions and advanced decoding outputs; otherwise, reinforcement learning may merely stabilize the distribution of LLMs rather than improve their fundamental capabilities. two) How to reflect the uncertainty of the reward model. The uncertainty could potentially act as a linking bridge between the weak reward model and the weak-to-strong learning algorithms. three) How to efficiently build high-quality process reward models that can provide fine-grained training signals for the reasoning process. Conclusion, Limitation, and Future Work: We present DeepSeekMath, which outperforms all open-source models on the competition-level MATH benchmark and approaches the performance of closed models. DeepSeekMath is initialized with DeepSeek-Coder-v one point five seven B and undergoes continual training for five hundred B tokens, with a significant component of the training data being one hundred twenty B math tokens sourced from Common Crawl. Our extensive ablation study shows web pages offer significant potential for high-quality mathematical data, while arXiv may not be as beneficial as we expected. We introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), which can notably improve mathematical reasoning capabilities with less memory consumption. The experimental results show that GRPO is effective even if DeepSeekMath-Instruct seven B has reached a high score on benchmarks. We also provide a unified paradigm to understand a series of methods and summarize several potential directions for more effective reinforcement learning.
Although DeepSeekMath achieves impressive scores on quantitative reasoning benchmarks, its capabilities in geometry and theorem proving are relatively weaker than those of closed models. For instance, in our dry run, the model cannot handle problems related to triangles and ellipses, which may indicate data selection bias in pre-training and fine-tuning. In addition, restricted by the model scale, DeepSeekMath is worse than GPT four on few-shot capability. GPT four could improve its performance with few-shot inputs, while DeepSeekMath shows similar performance in zero-shot and few-shot evaluation. In the future, we will further improve our engineered data selection pipeline to construct a higher-quality pre-training corpus. In addition, we will explore the potential directions for more effective reinforcement learning of LLMs. Appendix: Analysis of Reinforcement Learning. We provide the detailed derivation of the data source and gradient coefficient (algorithm and reward function) across various methods, including SFT, RFT, Online RFT, DPO, PPO, and GRPO. Supervised Fine-tuning: The objective of Supervised Fine-tuning is maximizing the log probability of the target sequence. The Data Source is the dataset employed for SFT. The Reward Function can be regarded as human selection. The Gradient Coefficient is always set to one. Rejection Sampling Fine-tuning: Rejection Sampling Fine-tuning first samples multiple outputs from the supervised fine-tuned LLMs for each question, and then trains LLMs on the sampled outputs with the correct answer. The Data Source is a question in the SFT dataset with outputs sampled from the SFT model. The Reward Function is a Rule (whether the answer is correct or not). The Gradient Coefficient is one if the answer is correct, and zero if it is incorrect. Online Rejection Sampling Fine-tuning: The only difference between RFT and Online RFT is that the outputs of Online RFT are sampled from the real-time policy model, rather than from the SFT model. Direct Preference Optimization (DPO): The objective of DPO is to optimize based on preferences between pairs of outputs. The Data Source is a question in the SFT dataset with outputs sampled from the SFT model. The Reward Function is human preference in the general domain (or can be a 'Rule' in mathematical tasks). The Gradient Coefficient is a function of the log probabilities of the preferred and rejected responses under the policy and reference models. Proximal Policy Optimization (PPO): The objective of PPO is to maximize a clipped surrogate objective involving the ratio of probabilities between the new and old policies, weighted by the advantage function. The Data Source is a question in the SFT dataset with outputs sampled from the policy model. The Reward Function is a reward model. The Gradient Coefficient is the advantage, which is computed by applying Generalized Advantage Estimation (GAE), based on the rewards and a learned value function. Group Relative Policy Optimization (GRPO): The objective of GRPO is to maximize an objective based on group-relative advantages. The Data Source is a question in the SFT dataset with outputs sampled from the policy model. The Reward Function is a reward model. The Gradient Coefficient is the group-relative advantage plus a KL-divergence term for regularization, where the advantage is computed based on the group reward scores.
The Clarks always gave us Clark bars. The Flynns would give out apples (every kid's dream). It was a good thing that I liked Smarties (we used to use them for pills when we played doctor) because we had a plastic pumpkin full of them. The school party was pretty cool. We would have candy and cupcakes with candy corn decorations.
Population, health, and the environment (PHE) is an integrated approach to human development that links family planning and health with conservation to achieve better outcomes for both people and ecosystems than single‑sector efforts. There is a deep relationship among population, health, and the environment, and they are interconnected with other factors that help maintain those links. More than one billion people live in ecological hotspots—many in remote areas of critical biodiversity under intense human pressure. Conservation work is often focused in these places, while local communities frequently face poor health due to limited access to health services and family planning, inadequate nutrition, and insufficient water and sanitation. Economic hardship and livelihoods dependent on natural resources and small‑scale agriculture can force people to use resources unsustainably, especially under pressures such as rapid population growth and health problems. That unsustainable use damages ecosystems and biodiversity, and human health in turn depends on healthy environments. The surrounding ecosystem provides goods and services—water, food, medicine, fuelwood, building materials, and other resources. Damage or disruption of these services can have severe consequences for human health. Population, Health, and Environment (PHE) projects work to create healthier communities and ecosystems by expanding health services in remote areas, increasing participation in conservation, and promoting family planning to slow population growth that strains natural resources. PHE simultaneously improves access to health care and helps communities manage natural resources to bolster health and livelihoods while protecting the environment. It also pursues synergies between human and ecosystem health through sustainable resource management, livelihood support, food security and nutrition programs, and habitat and ecosystem restoration. By aligning community well‑being with environmental conservation, PHE conserves biodiversity and improves local environmental health, demonstrating positive results across multiple sectors. History of PHE In the late 1980s, conservation organizations and practitioners began to see the benefits of improving people's quality of life by managing biodiversity and natural resources. These efforts were initially called integrated conservation and development projects (ICDPs) and addressed a wide range of community development needs. By the late 1990s it became clear that many ICDPs were not achieving conservation or development goals as successfully as expected, often because their scope was too broad. A key lesson was that projects succeed when they focus on a few targeted interventions and avoid excessive complexity. Drawing on these lessons, the conservation sector developed the PHE approach and a new generation of integrated projects. Program designers found that biodiversity conservation in developing countries produced the best long-term results when local people perceived the efforts as serving their economic and cultural interests. To be successful, ICDPs must consider the linkages between conservation and development objectives in each unique location; the first step is identifying where these goals intersect. Development interventions should be introduced where these intersections occur. 
One of the most important lessons from ICDPs over the past 20 years is that failing to involve project beneficiaries equitably as partners in all phases of implementation—from design through evaluation—has led to consistently disappointing results. Local participants are not a homogeneous group but differ widely in access to and dependence on resources, economic status, and vulnerability to environmental change. Lessons from past ICDPs led to the development of stakeholder analysis tools that identify individuals and groups with a stake in the project and incorporate them into every stage of design and implementation. Drawing on those experiences, the conservation sector piloted the PHE approach in the 1990s with the first generation of integrated projects. Since then, USAID, the David and Lucile Packard Foundation, Johnson & Johnson, and the Summit Foundation have worked to strengthen the approach. The PHE approach has advanced with the United Nations Millennium Development Goals (MDGs) in mind and has demonstrated the kinds of synergies needed to help achieve those goals. Past PHE Project Profiles Successful Communities from Ridge to Reef, Kenya This project was implemented in an area designated a UNESCO reserve, the Kiunga National Marine Reserve. The reserve has outstanding marine biodiversity of over 11,000 species, 60–70% of which are endemic to the Indo‑Pacific Ocean. The World Wildlife Fund (WWF), with funding from USAID, provided communities that depend on the reserve’s resources with access to family planning through a mobile clinic. Members of the community began to actively participate in conservation activities once they knew WWF was willing to help them meet basic health needs. Healthy Families, Healthy Forests, Cambodia When refugees from the Cambodia–Vietnam war in the 1970s returned home, they found their land had been destroyed by logging and agriculture. With USAID support, Conservation International helped the Khmer Daeum refugees replenish their land and provided them with their first access to family planning. Healthy families were able to develop long-term plans for sustainable land use, and local women created associations that mobilized the community to increase income-earning opportunities and strengthen participation in conservation activities.
April 19, 2018 — Procter & Gamble Co. has agreed to acquire Merck KGaA’s consumer health unit for €3.4 billion ($4.2 billion), gaining vitamin brands such as Seven Seas and greater exposure to Latin American and Asian markets. The maker of Pampers diapers and Gillette razors said the deal would expand its portfolio of consumer healthcare products, which includes Vicks cold relief. The Merck unit also includes vitamin brands Femibion and Neurobion. Healthcare currently accounts for 12% of P&G’s group sales. P&G will terminate its consumer care joint venture with Teva in July, and Merck plans to update its earnings guidance with its Q1 results. The deal follows GlaxoSmithKline’s agreement to buy Novartis out of their consumer healthcare joint venture for $13 billion after dropping its pursuit of Pfizer’s consumer unit. Pfizer has struggled to divest the business for as much as $20 billion after Reckitt Benckiser dropped out last month and Johnson & Johnson stepped away in January. Prescription-free remedies offer stable sales due to customer brand loyalty, albeit at lower margins than pharmaceuticals, but intense price competition online, mainly from Amazon, and cheaper store-brand products have weighed on profits in the U.S. and other Western markets. U.S.-based P&G derived 12% of group sales, or $7.5 billion, from health care products last year, including Oral-B toothbrushes and toothpastes. The purchase price for Merck’s business suggests the German company backed down from price demands of as much as €4 billion, which sources told Reuters had deterred initial suitors such as Nestlé, Perrigo and Stada owners Bain and Cinven. Morgan Stanley analyst Vincent Meunier said the price still implied a valuation of 4.7 times sales and around 19 times operating profit (EBITDA) for the business, at the high end of recent deals in the sector. "This will help Merck focus on its pharma unit and refurbish its pipeline," he said. Merck shares were up 0.5% at 08:33 GMT, among the top gainers in the German blue-chip DAX index, having risen 1.2% earlier. Merck said it fetched a multiple of about 19.5, above recent industry transactions and based on an adjusted "economically transferred" EBITDA of €173 million in 2017. The proceeds would allow it to reduce debt faster, giving its businesses, which include chemicals, pharmaceuticals and lab equipment, more flexibility, although it ruled out acquisitions worth more than €500 million this year. P&G also announced it would split up its consumer care joint venture with Teva, PGT Healthcare, on July 1, saying strategies were no longer aligned. PGT accounts for nearly all of P&G’s personal health care sales outside the United States. Teva said the terms of the agreement to terminate the JV with P&G would not be disclosed and that the dissolution was amicable. Merck said the divestment of its consumer health business did not change its goal of keeping net sales of its established prescription drugs, such as Erbitux (for cancer) and Rebif (for multiple sclerosis), organically stable until 2022. It will issue guidance for 2018 to reflect the sale of the consumer healthcare business when it publishes first-quarter financial results on May 15. In India, about 3,300 Merck employees could move to P&G upon completion of the transaction, which is expected by the fourth quarter. As part of the deal, P&G will buy a majority stake in the German company’s Indian consumer health business, Merck Ltd, and subsequently make a mandatory tender offer to minority shareholders.
A final agreement with P&G on Merck’s French consumer health business has yet to be worked out with labour representatives, but that will not change the overall price agreed with P&G. JP Morgan acted as financial adviser to Merck on the transaction, and Freshfields Bruckhaus Deringer was legal adviser. $1 = 0.8079 euros. Additional reporting by Shalini Nagarajan in Bengaluru, Maria Sheahan in Frankfurt and Tova Cohen in Tel Aviv; editing by Susan Fenton and Jason Neely.
After a hole was punched in me and a pretty piece of yarn was tied on, I was put back in the basket with a lot of other tags—except the one on top of me was fuzzy and made my nose itch. Days later, all the other tags and I were put in a big, heavy envelope, taped up, and sent to someplace called New Hampshire (is there an Old Hampshire?). When that package was opened, I saw stacks and stacks of tags spread out on a table. People looked at us and thought we looked great, and we did—my orange was bright and cheerful. We were taken to a big meeting outside, and it was hot. There were lots of people and music. People walked around, looking at all the stuff. I was given to a lady who put me in her big, dark purse. I guess I must have gone to sleep because I don't remember how we got to her house. She took me out and stuck me in her Bible as a bookmark. It was days before she opened it, but one day, when I was lying on the nightstand, I heard Christian music.
By Kim I. Hartman May 22, 2010 Tampa — Hulk Hogan is back in federal court in Florida, this time over a cartoon commercial, Tampa Bay Online reports. Hogan claims his reputation was harmed by a Cocoa Pebbles ad featuring Bamm-Bamm tossing "Hulk Boulder" into the air and winning the match. Hogan, whose real name is Terry Bollea, is suing the maker of the cereal, Post Foods, accusing the company of appropriating his image. In the "Cocoa Smashdown" spot, a cartoon character resembling Hogan easily beats Fred and Barney inside the ring, but then Bamm-Bamm steps in and pounds the blond-haired, mustachioed wrestler to bits. The federal lawsuit states Hogan "is shown humiliated and cracked into pieces with broken teeth, with the closing banner, 'Little Pieces…BIG TASTE!'" The commercial character goes by the name "Hulk Boulder," which Hogan's lawsuit says is a name he used early in his career until wrestling promoter Vince McMahon decided he should have an Irish name. The lawsuit says Post Foods never sought or received Hogan's permission to use his likeness to promote the cereal. Hogan says he raised his objections with Post in August, but the ads continued. The wrestler contends he has been harmed by, among other things, "the unauthorized and degrading depictions in the Cocoa Smashdown advertisements." Hogan markets his own products, including Hogan Energy drink and Hulkster Burgers, a line of microwaveable hamburgers and chicken sandwiches sold at Walmart. He recently made news when he filed suit in state court, later moved to federal court, against Wells Fargo Insurance for failing to upgrade his coverage as his exposure to risk grew; Hogan claims he was inadequately insured when his teenage son, Nick Bollea, got into a wreck that grievously injured passenger John Graziano. Hogan settled a lawsuit with Graziano in February. A Post spokesperson could not immediately be reached for comment.
At least not until someone else in the family gets a job working for them. I thought this would be the perfect little getaway: a hotel near the beach where we could relax, go to the beach, swim in the pool, and have dinner out. Most importantly, I wouldn't have to cook or clean. Then again, I sometimes forget that I still have three small people who need lots of attention. Damn them! I was looking to relax. I also forget that, to take these trips, I have to spend an entire day packing and getting everyone ready, then an entire day unpacking and cleaning up afterward. Is it worth two days without cooking and cleaning? Maybe, but I'm still not sure.
Donna Finegan-White, 44, a mother of two from Swindon, underwent a double mastectomy in 2014 because breast cancer runs in her family. When she woke from the operation at The Great Western Hospital, she discovered she had been given breast implants she had not requested, and they were much larger than her original breasts. Ms Finegan-White received a £10,000 payout and, after waiting two years, had the implants removed at another hospital in 2016; she later required further surgery when a blood clot developed after the removal. She says doctors apologised after failing to spot two inoperable tumours; in March she was told she has a rare terminal cancer of the trachea and was given about two years to live, and she is seeking legal action again. The Great Western Hospital has been contacted for comment and denies any wrongdoing. Donna Finegan-White said the surgeons made the mistake with her not once but twice. She underwent a double mastectomy in October 2014 to reduce her risk of developing breast cancer after both her late mother, Carol Manola, and her aunt were diagnosed with the disease. She had requested temporary expanders, which stretch the skin and pectoral muscle, and never agreed to implants; instead she says permanent sub-pectoral breast implants were inserted without her consent. Finegan-White, who said she had been undecided about reconstruction and had only signed for removal, complained that the surgeons cut corners to save time and money and that the treatment left her severely affected. She sought medical help several times for pain and swelling and suffered a significant psychological reaction, and ultimately underwent corrective surgery to remove the implants at a different hospital in Oxford on 23 February 2016. She has a son, Dominic, 25, and a daughter, Shakira, 23. After being discharged that evening, she felt a pain in the right side of her body and was rushed back to the hospital for another life-saving operation after doctors discovered a blood clot. The Great Western Hospital in Swindon had given her breast implant surgery without her consent. After receiving her payout, in March this year she was told she had a rare cancer of the trachea, which she believes is linked to tumours missed by doctors. She is now planning to sue The Great Western Hospital Trust again, blaming them for missing the cancer on CT scans she had following her initial surgery in 2014. She said, "They came over to my bedside when I was in hospital and said, 'Sorry Donna, we've looked back at your CT scans from 2014 and we didn't spot your tumours.' My friend was sat next to me when they told me, so I've got a witness to prove it." Describing the incident with her breasts, she said, "I expected to come out of the surgery risk free of breast cancer and without permanent implants, as this was what had been agreed upon. Yet I woke up with implants, which I had never signed for in the consultations with the surgeon. At this point I felt shocked, very upset and frustrated, as it was completely unexpected. We put a great deal of trust in medical staff, relying on them for expert care. I just feel totally let down and angry by the care I received." Ms Finegan-White said: "I suffered a great deal of pain and trauma for months because of the implants that I never signed up for. By speaking out I hope my story reminds hospitals of the importance of upholding patient consent."
She has called on hospital trusts to ensure they honour the wishes of patients and has instructed specialist medical negligence lawyers at Irwin Mitchell to investigate the level of care she received from Great Western Hospitals NHS Foundation Trust. The trust denied it had acted negligently but admitted the issue of whether Donna consented to the implants fell below "reasonable standards" and the matter was settled out of court. A Great Western Hospital spokesman told MailOnline: "We were given consent by the patient for implants in 2014, we did not act negligently and the case was settled in 2016. We have clear and effective consent processes that are followed across the Trust, which outline the risks that patients may face during and after any surgery, including the common risk of blood clots. As part of our continued improvement in services we are further strengthening our consent processes." James Pink, expert medical negligence lawyer at Irwin Mitchell representing Donna, said: "Donna had undergone counselling at the request of the hospital trust to prepare psychologically for a double mastectomy and was incredibly shocked and upset to come out of surgery with implants she had not consented to." This was already a distressing time, and this just compounded her problems. The NHS had shown care and compassion in helping Donna prepare for her surgery. However, an operation that was intended to reduce Donna's risk of developing breast cancer ended up causing her unnecessary distress and suffering.
March 9, 2018 By Sonali Paul and Clara Denina MELBOURNE/LONDON (Reuters) — At least three bidders are expected to submit final offers for Rio Tinto’s Hail Creek and Kestrel coal mines in Australia, which could fetch up to $2.5 billion, people familiar with the process said. The Anglo-Australian mining company made a strategic decision in 2017 to exit coal and focus on growth in iron ore, copper and aluminum. Hail Creek and Kestrel are Rio Tinto’s last two coal mines, following the $2.7 billion sale of its Hunter Valley coal operations in Australia to Yancoal last year. Australia’s Whitehaven Coal is expected to bid, as well as Australian private equity firm EMR Capital and Indonesia’s Adaro Energy. A consortium led by U.S. private equity firm Apollo Global Management is also expected to be in the running. Final bids for the two mines, which mostly produce coking coal used in steel mills, are due on Monday, March 12. All sources declined to be named as the bids were subject to confidentiality agreements. The sale is eagerly awaited by investors, who are hungry for more cash returns after a bumper payout for 2017, as the company is no longer looking to cut debt and has no plans for any big new investments. “If Rio were to sell these assets, the likely outcome for the use of proceeds would be to direct them to shareholders,” UBS analysts said in a note this week, adding that the mines could hand back more than $9 billion over the next 12 months. EMR Capital has lined up Indonesia’s second biggest coal producer Adaro as a partner on the bid, after talks with Chinese wealth fund CIC fell through, according to two people close to the process. Adaro did not respond to telephone calls and written requests for comment. EMR Managing Director Jason Chang declined to comment on whether it is bidding. “Our themes haven’t changed. We’re still looking for coking coal, potash and copper assets,” Chang said. Apollo Global Management is bidding with pension fund Canada Pension Plan (CPP), U.S. coal company Xcoal Energy & Resources, and a former Glencore executive for the assets. Apollo and CPP declined to comment. Xcoal Resources was not immediately available to comment. Whitehaven declined to comment, but is seen in a position to make acquisitions for the first time in several years, with its gearing slashed to just 4 percent. However, for a A$4.3 billion ($3.4 billion) company, Hail Creek and Kestrel would be a huge bite, and analysts expect it would need to either sell new shares to help fund a deal or line up a partner in the mines. UBS speculated that Mitsui, which co-owns the Kestrel mine, "may have a desire to increase its stake." Mitsui Australia’s spokesman declined to comment. China-backed Yancoal Australia (YAL.AX) looked at the assets, but as of late Friday was no longer in the race, according to a lending source. Yancoal declined to comment. Rio’s partners in Hail Creek include units of Nippon Steel & Sumitomo Metal Corp (5401.T), Marubeni Corp (8002.T), and Sumitomo Corp (8053.T), while Kestrel is minority-owned by Japan’s Mitsui & Co (8031.T). The final price Rio gets will hinge on bidders’ outlook for coking coal prices. UBS values the two mines at $1.94 billion, based on a long-term price of $120 a ton for hard coking coal, while Macquarie values them at $2.7 billion based on $125 a ton. 
Those coking coal prices are well below current levels of around $209 a ton, supported by factors including capacity curbs imposed by Australia’s top coal hauler, congestion at one of the country’s key ports, and problems at some mines. "Rio Tinto is in a position where they can call whatever price they want," said one person close to the process. Rio, which is being advised by Credit Suisse, declined to comment on the sale. Reporting by Clara Denina and Sonali Paul; additional reporting by Fergus Jensen in Jakarta, Sharon Klyne in Sydney, Kane Wu in Hong Kong and Yuka Obayashi in Tokyo; editing by David Evans.
Engineering controls for nanomaterials are hazard-control methods and equipment that isolate workers from exposure. They are physical changes to the workplace and, after systems and facilities are designed, are the most important methods for controlling nanomaterial health and safety risks. The primary hazard is inhalation of aerosols containing nanoparticles. Many engineering controls developed for other industries can be used or adapted to protect workers, including ventilation and filtration using laboratory fixtures such as fume hoods, containment using gloveboxes, and non-ventilation measures such as sticky mats. Research is ongoing to identify which engineering controls are most effective for nanomaterials. Controlling exposures to occupational hazards is the fundamental way to protect workers. A hierarchy of controls guides implementation of feasible and effective measures: elimination, substitution, engineering controls, administrative controls, and personal protective equipment. Methods earlier in the list are generally more effective at reducing hazard-related risk. Process changes and engineering controls are recommended as the primary means to reduce exposures, with personal protective equipment as a last resort. Following this hierarchy promotes inherently safer systems, where the risk of illness or injury is substantially reduced. Engineering controls are physical changes to the workplace that isolate workers from hazards—by enclosing them or by removing contaminated air through ventilation and filtration. Well-designed engineering controls are typically passive, functioning independently of worker interactions, which reduces the potential for behavior to affect exposure levels. They also ideally do not interfere with productivity or ease of processing, because operators may be motivated to circumvent controls that hinder work. Initial costs for engineering controls can be higher than those for administrative controls or personal protective equipment, but long-term operating costs are often lower and can sometimes provide cost savings elsewhere in the process. Nanomaterials have at least one primary dimension of less than 100 nanometers and often exhibit properties different from those of their bulk components that are technologically useful. Because nanotechnology is recent, the health and safety effects of nanomaterial exposures and acceptable exposure levels are not yet fully understood. Processing and manufacturing of nanomaterials involve a wide range of hazards. The optimal engineering controls for a given situation are influenced by the quantity and dustiness of the material and the duration of the task. Stronger controls are required if dry nanomaterials cannot be substituted with a suspension, or if procedures such as sonication or cutting of a solid matrix containing nanomaterials cannot be eliminated. As with any new technology, early exposures are likely among researchers in laboratories and pilot plants. Researchers handling engineered nanomaterials should work in ways that protect their safety and health. Control measures for nanoparticles, dust, and other hazards are most effective when implemented as part of a comprehensive occupational safety and health management system. Critical elements include management commitment and employee involvement, worksite analysis, hazard prevention and control, and adequate training for employees, supervisors, and managers. Ventilation systems are classified as local or general. 
Local exhaust ventilation operates at or near the source of contamination, often in conjunction with an enclosure. In contrast, general exhaust ventilation treats an entire room through a building's HVAC system. Local exhaust ventilation (LEV) applies an exhaust system at or near the contamination source. If properly designed, LEV is more efficient than dilution ventilation at removing contaminants, requiring lower exhaust volumes, less make-up air, and often lower costs. By capturing contaminants at the source, LEV prevents them from entering the general work environment. Examples include fume hoods, vented balance enclosures, and biosafety cabinets. Exhaust hoods without an enclosure are less preferable, and laminar flow hoods are not recommended because they direct air outward toward the worker. In a 2006 international survey of nanotechnology firms and research laboratories that manufactured, handled, researched, or used nanomaterials, all participating organizations reported using some type of engineering control. The most common control was the traditional laboratory fume hood, used by two-thirds of the firms. Fume hoods are recommended to have an average inward face velocity of 80–100 feet per minute (fpm). For higher-toxicity materials, a face velocity of 100–120 fpm is recommended to provide better protection; velocities exceeding 150 fpm are not believed to improve performance and may increase hood leakage. New fume hoods designed for nanotechnology are based on low-turbulence balance enclosures, originally developed for weighing pharmaceutical powders; these enclosures provide adequate containment at lower face velocities (typically 65–85 fpm) and are useful for operations that disturb and aerosolize nanomaterials. Air exiting a fume hood should pass through a HEPA filter and be exhausted outside; used filters should be handled as hazardous waste. Turbulence can cause nanomaterials to exit the front of the hood; avoid this by keeping the sash in the proper position, keeping the interior uncluttered, and avoiding fast movements while working. High face velocities can result in loss of powdered nanomaterials. As of 2012, there was little research on the effectiveness of low-flow fume hoods, but evidence indicated that air curtain hoods effectively contain nanoparticles. Other enclosures: Biosafety cabinets are designed to contain bioaerosols, which are similar in size to engineered nanoparticles, and are therefore believed to be effective for nanoparticles. However, common biosafety cabinets are more prone to turbulence. As with fume hoods, they should be exhausted to the outside of the facility.
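To make the face-velocity guidance above concrete, here is a minimal sketch, assuming the standard ventilation relation Q = V × A (exhaust airflow equals face velocity times open face area); the function names, the range check, and the example hood area are illustrative, and the thresholds simply restate the recommendations in the text.

def required_exhaust_cfm(face_velocity_fpm, face_area_ft2):
    # Q = V * A: exhaust airflow in cubic feet per minute needed to hold a
    # given average inward face velocity (fpm) across the hood opening (ft^2).
    return face_velocity_fpm * face_area_ft2

def face_velocity_recommended(v_fpm, higher_toxicity=False):
    # Ranges restated from the guidance above: 80-100 fpm for general use,
    # 100-120 fpm for higher-toxicity materials; velocities above 150 fpm
    # are believed to increase hood leakage rather than improve containment.
    low, high = (100, 120) if higher_toxicity else (80, 100)
    return low <= v_fpm <= high

# Illustrative example: a hood with a 7.5 sq ft opening run at 100 fpm.
print(required_exhaust_cfm(100, 7.5))                         # -> 750.0
print(face_velocity_recommended(100, higher_toxicity=True))   # -> True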
Businesses all over the country are trying to be the first to open, but when it comes to actually being open on Thanksgiving, the owner of the Sears Hometown store in Plymouth is simply saying, "Enough is enough." So what did she think when Sears corporate told her she had to be open on Thanksgiving Day? "I grew angry," she said. "I won't lie — I grew angry." She said she would make what she called an easy call: "We are not going to allow corporate retailers to affect our family values. The doors of this business will be closed on Thanksgiving, even if it means ruffling a few feathers. Even for my employees, I value them enough that I would not want them to work on a day that I believe is meant to be spent with family."

The deals will still be the same, and the customers we spoke with said they are fine with the decision. "I think it is great. I would not want to go shopping on Thanksgiving." She is also calling for a law to allow franchise owners the flexibility to close on holidays as they see fit, and as a mother she sees this as a teaching moment as well. Family values are what this is about, and she says our country is in the state it is in because family values are under threat. So far, the company has not responded to her detailed letter. A quick Granite State poll on holiday shopping found that only nine percent of respondents approve of stores opening on the holiday.

While businesses across the country are announcing plans to open for shopping on Thanksgiving, the owner of a Sears Hometown Store in Plymouth says enough is enough. Franchise owner Holly Cassiano said she grew angry when Sears' corporate office told her she had to be open on Thursday and made a quick decision: her store will remain closed on Thanksgiving. "We are not going to let corporate retailers rule over our family values and take this away from us," she said. "I value my employees enough that I wouldn't have them work on a day that's meant to be spent with family."

Cassiano said the store will open at 6 a.m. on Black Friday and offer the same deals. Customers told News 9 they support the decision. "I think that's great," said Jane Marrer of Plymouth. "I wouldn't want to go shopping on Thanksgiving, really." "If she wants to be closed on Thanksgiving, she should be able to do that," said Rob Conkling of Plymouth. Cassiano is also starting a petition calling on lawmakers in Concord to pass a law allowing franchise owners flexibility to close on holidays as they see fit.

When Holly Cassiano received a memo saying she was required to open the Sears Hometown Store she owns at 7 p.m. on Thanksgiving Day, she said she knew she had to take a stand. A mother, she called the situation a teaching moment: "Family values is really what it boils down to, and I feel like the country is in the state that it's in because family values comes last," she said. "I was furious, to be honest. It's just against everything I believe in." She said the holiday should not be ignored: "For us to just bypass this and say that the dollar figure is more important than us being with our families, it's unacceptable." Cassiano said she sent a letter to Sears stating her intention to ignore the required hours and remain closed on Thanksgiving; the company had not responded.

A Granite State Poll by the University of New Hampshire Survey Center shows a majority of New Hampshire residents disapprove of stores opening on Thanksgiving: 9% approved, 54% disapproved, and 37% said they didn't care either way. The poll's margin of sampling error is ±4.2 percentage points.
"They are coming back at us with, 'We're going to take away your bonus for the rest of the year. This is a breach of contract,'" Cassiano said. Cassiano, who has owned the franchise store in Plymouth, New Hampshire, since 2009, will follow the rules and open on Black Friday at 6 a.m. "I'm just a little peon. We don't make a million dollars like they do," he said. In a statement, Sears defended its decision. "We have encouraged all of our dealers and franchisees to be open on Thanksgiving evening because we believe that is what many consumers want," said David Buckley, chief marketing officer of Sears Hometown and Outlet Stores, Inc. Customers at the store Wednesday said they supported Cassiano's stance, and he has even received flowers from people who wanted to show their support. "There are a lot of people who have young kids," said Linda Carmichael of Bridgewater. "They don't need to work even though they need the money; they need to be at home with their family." Holly's contract with Sears Hometown Stores is up in a month. If it gets taken away, she says she's prepared to go to court.
The Edge of Evolution: The Search for the Limits of Darwinism is a 2007 intelligent-design book by Discovery Institute fellow Michael Behe, published by The Free Press. Behe argues that while evolution can produce changes within species, there is a limit to evolution's ability to generate diversity; this limit, which he calls the "edge of evolution," lies somewhere between the species and order levels. On this basis he argues that known evolutionary mechanisms cannot account for all observed diversification since the last universal ancestor and that intervention by an intelligent designer could adequately explain much of life's diversity. It is Behe's second intelligent-design book; his first was Darwin's Black Box. While the book was well received by creationists and some non-biologists, many scientists—particularly biologists—have been highly critical of Behe's methods, evidence, and conclusions.

Behe begins by observing that the theory of evolution comprises three related ideas—common descent, natural selection, and random mutation. He says he accepts common descent and natural selection without question but questions the scope and power of random mutation to produce beneficial changes that lead to novel, useful structures and processes. He uses the term "Darwinian evolution" to denote evolution that relies on all three of these factors and labels scientists who regard it as the only form of evolution "Darwinists"; he says they take exception to intelligent design and to other theistic and non-theistic complexity theories.

Behe's central claim is that Darwinian evolution exists but is better at modifying existing metabolic pathways (or "molecular machinery") than at creating new ones, and therefore plays only a limited role in the development and diversification of life. He examines genetic changes in the Plasmodium (malaria parasite) and human genomes as they respond to each other's defenses, describing the situation as "trench warfare, not an arms race." He contrasts this hemoglobin-destroying, protein-pump-compromising "war by attrition" with the "creative process" required to develop complex structures such as the bacterial flagellum and highly complex systems like the immune system.

Behe calculates the "edge of evolution"—the point at which Darwinian evolution ceases to be an effective agent of creative biological change—by considering the number of mutations needed to move from one genetic state to another and the population size of the organism. He concludes that purposeful design plays a major role in developing biological complexity by producing non-random mutations that are then sculpted by natural selection. Design favoring intelligent life, he argues, is supported not only by recent findings about biological complexity but also by discoveries in chemistry (for example, the life-supporting structure of water) and in cosmology (the anthropic principle). He strongly defends common descent for all life on Earth, including a common ancestor for humans and chimpanzees, and says the evidence is so overwhelming it should be obvious, even "trivial." Behe contends that the mutations required to bridge higher taxonomic levels are impossible without design—what he calls the "edge of evolution"—arguing that the probability of multiple simultaneous beneficial mutations is extremely low and that vast numbers of microbes have produced little in the way of new proteins or binding sites. He acknowledges that his support for intelligent design is a minority view within the scientific community.
He implies that, for this reason, he avoids detailed discussion about the nature of life's designer and takes deliberate steps to distinguish himself from the Young Earth creationism movement.

Reviews by scientists, especially biologists, have been highly critical: many rejected Behe's methods, evidence, and conclusions, although some creationists and a few biologists were more positive. Richard Dawkins of the University of Oxford reviewed the book, focusing on Behe's claim that random mutation, rather than nonrandom natural selection, is the driving force behind evolution. Dawkins disputed Behe's assertion that no amount of random mutation could produce today's biological diversity, citing examples of selective breeding. He also argued that Behe had failed to engage with the relevant scientific research, that his work would not pass peer review in a scientific journal, and that he had bypassed peer review by publishing a popular book aimed at the general public rather than a scientific audience.

Prominent biologists writing in The New Republic, Science, and Nature made similar points, noting that Behe accepts much of evolutionary theory but replaces random mutation with guided mutations by an unnamed designer. Other reviewers criticized Behe for quote mining, failing to offer a theory of intelligent design despite the ten years since Darwin's Black Box, asserting a contradiction between design and "unbroken natural law," proposing an erroneous model, and ignoring publications and data that contradict his claims. Specific criticisms included using irrelevant calculations as sources, asserting that simultaneous mutations are necessary when evidence supports cumulative mutations, and neglecting the scientific literature on protein evolution. Michael Ruse, professor of philosophy at Florida State University, wrote that the book offered no developments beyond Darwin's Black Box, repeated earlier arguments, and dismissed opposing views without analysis; others expressed similar views.
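As a toy numerical illustration of that last disputed point (the rates and population size below are hypothetical, not Behe's calculation or the reviewers' figures), the gap between requiring two mutations simultaneously and acquiring them one at a time is easy to see:

    # Hypothetical per-site beneficial-mutation probability per replication
    # and population size, chosen only to show the orders of magnitude.
    p = 1e-8
    N = 1e12

    print(N * p ** 2)  # about 1e-4: two specific mutations at once almost never occur
    print(N * p)       # about 1e4: each single step occurs many times per generation

If each single mutation is individually selectable, the cumulative route multiplies modest waiting times; if both must appear at once, the expected waiting time explodes, which is why critics object to the simultaneity assumption.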
This is the first study to look at pregnancy this way. The length of pregnancy can vary naturally by as much as five weeks, research suggests. The study of 125 women is the first to calculate gestation by pinpointing the exact time of conception. It found age, time to implantation and a woman's own birth weight were also linked to pregnancy length. An expert said the findings, published in the journal Human Reproduction, challenged whether a "due date" for women was helpful.

Due dates are often calculated as 280 days after the start of the woman's last menstrual period or, more accurately, by ultrasound. Yet only 4% of women deliver when predicted, and only 70% within 10 days of their estimated due date. The research team at the US National Institute of Environmental Health Sciences measured hormone concentrations in daily urine samples taken from women trying to conceive naturally to determine exactly when ovulation and implantation of the fertilised egg had occurred. They found that the average length from ovulation to birth was 268 days, just over 38 weeks. Once they excluded six premature births, they found that gestation varied naturally by as much as 37 days.

Dr Anne Marie Jukic said: "We were a bit surprised by this finding. We know that length of gestation varies among women, but some part of that variation has always been attributed to errors in the assignment of gestational age. Our measure of length of gestation does not include these sources of error, and yet there is still five weeks of variability. It's fascinating."

The study also showed that embryos that took longer to implant also took longer from implantation to delivery. Older women were more likely to have longer pregnancies, and there was also a link between gestation and a mother's weight when she was born. The researchers also found that the length of previous or subsequent pregnancies was related to the length of the one being studied, suggesting a consistency about when women deliver. But they said it was too early to make any clinical recommendations. "I think the best that can be said is that natural variability may be greater than we have previously thought and, if that is true, clinicians may want to keep that in mind when trying to decide whether to intervene on a pregnancy," said Dr Jukic.

Dr Virginia Beckett, spokesperson for the Royal College of Obstetricians and Gynaecologists, said very little was known about the exact mechanisms that determine when labour begins. "This is a very interesting piece of work, and knowing when is the right time to deliver is a huge issue," she added. It supports the suggestion that giving someone a "due date" may not be a great idea and can make women feel anxious when they go over. "It would be better to say, 'You will be delivered by this time,'" she said, "to take the pressure off."

How long a healthy pregnancy lasts can vary by as much as five weeks, even when doctors precisely determined the date of conception, a new study suggests. Although the length of a healthy pregnancy is known to be variable, some of this variation was thought to be due to errors in determining the age of the baby, the researchers said. The new study was able to pinpoint the exact day of conception by analyzing urine samples from 125 women who were trying to become pregnant in the early 1980s.
Changes in hormone levels in the urine were used to determine the day of ovulation—presumed to be the same day as conception—as well as the day the embryo implanted in the uterus. On average, pregnancies lasted 38 weeks from the day of conception to the day the baby was born, or about 40 to 41 weeks from the day of the women's last menstrual period. The latter measure is more commonly used to determine a woman's due date. But even after excluding babies born preterm, pregnancy length ranged from about 35 to 40 weeks from conception to birth (about 38 to 43 weeks from the last menstrual period).

The researchers were surprised to see such variation even with a precise determination of the day of conception, said study researcher Dr. Anne Marie Jukic, a postdoctoral fellow in the Epidemiology Branch at the National Institute of Environmental Health Sciences in Durham, N.C. The findings suggest that giving a woman a precise due date may not be the best way to communicate pregnancy duration. Only about 4 percent of women actually deliver on their due date, which is typically estimated as 280 days after the last menstrual period. "The emphasis on a single due date may make the length of pregnancy seem more predictable than it really is," Jukic said. Providing women with a range of due dates may be a better way to communicate pregnancy length, she added.

The length of participants' previous pregnancies was also strongly linked to the length of their current pregnancy, suggesting that this measure might help determine a woman's natural pregnancy length, the researchers said. The study also found that early-pregnancy characteristics may help predict due dates: embryos that took longer to implant were associated with later deliveries, while pregnancies in which embryos showed a late rise in progesterone were about 12 days shorter than those with an early rise. However, some experts criticized the study. Dr. Tomer Singer, a reproductive endocrinologist and infertility specialist at Lenox Hill Hospital in New York, said it did not provide much new information.
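For illustration only, the conventional rule mentioned above (280 days after the start of the last menstrual period) and the study's roughly five-week natural range can be sketched in a few lines; the dates here are hypothetical:

    from datetime import date, timedelta

    def conventional_due_date(last_menstrual_period):
        # Conventional estimate: 280 days after the start of the last period.
        return last_menstrual_period + timedelta(days=280)

    lmp = date(2024, 1, 1)  # hypothetical date
    print(conventional_due_date(lmp))  # 2024-10-07
    # The study found gestation varied naturally by up to 37 days, so a
    # window around the estimate reflects the data better than a single day.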
Golf — Sharma shows potential despite faltering at final hurdle

March 5, 2018 (Reuters) - Shubhankar Sharma’s hopes of winning on his PGA Tour debut were dashed by a torrid final round on Sunday, but the 21-year-old Indian’s performance at the WGC-Mexico Championship has made the golfing world sit up and take notice. Sharma teed off in the final group along with Phil Mickelson and Tyrrell Hatton at Club de Golf Chapultepec, holding a two-shot lead and needing one more great round to become the youngest winner of a World Golf Championships event, as well as the first to win in his first WGC start. However, his lead soon vanished, and a closing stretch of four bogeys in his last six holes condemned him to a three-over 74 in the final round, pushing him down to a tie for ninth on 274. “A little bit disappointed, I was leading and I think I couldn’t finish it off today,” said Sharma. “But that’s what the game is about, and what I learned, especially playing with Phil, I’ll cherish it forever.”

Sharma was six when he went to a golf course for the first time and got hooked on the sport. His father quit the Indian Army to help his son chase his golfing goals. He has already tasted victory this season with European Tour wins in South Africa and Malaysia, and his Joburg Open win earned him a ticket to this year’s British Open at Carnoustie. Sharma, who lives in Chandigarh although his family hails from the northern Indian state of Jammu and Kashmir, is the country’s highest-ranked golfer, at No. 66. A top-50 spot is needed to punch his ticket to next month’s U.S. Masters, but the tournament has in the past invited a non-exempt Asian player, raising the possibility that Sharma might make Augusta National without qualifying automatically.

Five-time major champion Mickelson, who defeated Justin Thomas on the first playoff hole to win the event, had mistaken Sharma for a member of the media on Saturday, but will no doubt be able to put a name to the face in the future. “I saw how well he struck the golf ball. He hit a beautiful tee shot on one; you can tell he can really play,” Mickelson said. “I saw some of the putts, some of the highlights with the putter. I know he’s a very talented player, and I believe he’s leading the Order of Merit on the European Tour, so I know what a great player Mr. Sharma is. I probably shouldn’t say that—he’s 26 years younger than me!”

With his two European Tour victories, Sharma seems poised for a breakout year, and compatriot Arjun Atwal said he has the mental game to match his golf skills. “To me, he has a very calm attitude. He doesn’t get flustered; he takes everything in his stride, and that’s what I’ve always noticed about him,” said the first Indian winner on the PGA Tour. “He’s been very level-headed since I’ve known him. I can’t see him being upset or cussing.” It's amazing to see what he's doing at the age of 21. "Shubhankar has the type of game where you can't pinpoint what he needs to improve. He's not exceptionally strong in any one area, but every part of his game is very good. He's like Steve Stricker or Jim Furyk. When you play with these guys, they don't do anything exceptional, like 'wow', but everything in their game is good. There's nothing bad at all."

(Reporting by Sudipto Ganguly in Mumbai; editing by Peter Rutherford.)
Diffusion models are latent variable models of the form p theta of x_0 is defined as the integral of p theta of x_0 through x_T with respect to x_1 through x_T, where x_1 through x_T are latents of the same dimensionality as the data x_0 sampled from q of x_0. The joint distribution p theta of x_0 through x_T is called the reverse process, and it is defined as a Markov chain with learned Gaussian transitions starting at p of x_T, which is a standard normal distribution. What distinguishes diffusion models from other types of latent variable models is that the approximate posterior q of x_1 through x_T given x_0, called the forward process or diffusion process, is fixed to a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule beta_1 through beta_T.

Training is performed by optimizing the usual variational bound on negative log likelihood. The forward process variances beta_t can be learned by reparameterization or held constant as hyperparameters, and expressiveness of the reverse process is ensured in part by the choice of Gaussian conditionals in p theta of x_{t-1} given x_t, because both processes have the same functional form when the beta_t are small. A notable property of the forward process is that it admits sampling x_t at an arbitrary timestep t in closed form: using the notation alpha_t is defined as 1 minus beta_t and alpha_bar_t is defined as the product of alpha_s from s equals 1 to t, the marginal q of x_t given x_0 is a Gaussian with mean equal to the square root of alpha_bar_t times x_0 and covariance equal to 1 minus alpha_bar_t times the identity. Efficient training is therefore possible by optimizing random terms of L with stochastic gradient descent. Further improvements come from variance reduction by rewriting L. This rewritten form uses KL divergence to directly compare p theta of x_{t-1} given x_t against forward process posteriors, which are tractable when conditioned on x_0. Consequently, all KL divergences are comparisons between Gaussians, so they can be calculated in a Rao-Blackwellized fashion with closed form expressions instead of high variance Monte Carlo estimates.

Section 3: Diffusion models and denoising autoencoders

Diffusion models might appear to be a restricted class of latent variable models, but they allow a large number of degrees of freedom in implementation. One must choose the variances beta_t of the forward process and the model architecture and Gaussian distribution parameterization of the reverse process. To guide our choices, we establish a new explicit connection between diffusion models and denoising score matching that leads to a simplified, weighted variational bound objective for diffusion models. Ultimately, our model design is justified by simplicity and empirical results. Our discussion is categorized by the terms of the variational bound.

Subsection 3.1: Forward process and L_T

We ignore the fact that the forward process variances beta_t are learnable by reparameterization and instead fix them to constants. Thus, in our implementation, the approximate posterior q has no learnable parameters, so L_T is a constant during training and can be ignored.

Subsection 3.2: Reverse process and L_1 through L_{T-1}

Now we discuss our choices in the reverse process for t between 1 and T. First, we set the covariance matrix to untrained time dependent constants. Experimentally, both sigma_t squared equals beta_t and sigma_t squared equals a scaled version of beta_t had similar results.
These are the two extreme choices corresponding to upper and lower bounds on reverse process entropy for data with coordinatewise unit variance. Second, to represent the mean, we propose a specific parameterization motivated by the following analysis of L_t. We see that the most straightforward parameterization of the mean is a model that predicts the forward process posterior mean. However, we can expand this term further. This reveals that the mean must predict a specific function given x_t. Since x_t is available as input to the model, we may choose a parameterization where a function approximator is intended to predict epsilon from x_t. To sample x_t-1 is to compute a value based on x_t, the function approximator, and some added noise. The complete sampling procedure resembles Langevin dynamics with the function approximator as a learned gradient of the data density. Furthermore, with this parameterization, the variational bound term simplifies to a form which resembles denoising score matching over multiple noise scales indexed by t. As this term is equal to one term of the variational bound for the Langevin-like reverse process, we see that optimizing an objective resembling denoising score matching is equivalent to using variational inference to fit the finite-time marginal of a sampling chain resembling Langevin dynamics. To summarize, we can train the reverse process mean function approximator to predict the forward process posterior mean, or by modifying its parameterization, we can train it to predict epsilon. (There is also the possibility of predicting x_0, but we found this to lead to worse sample quality early in our experiments.) We have shown that the epsilon-prediction parameterization both resembles Langevin dynamics and simplifies the diffusion model's variational bound to an objective that resembles denoising score matching. Nonetheless, it is just another parameterization of the reverse process, so we verify its effectiveness in our experiments in an ablation where we compare predicting epsilon against predicting the forward process posterior mean. Subsection 3.3: Data scaling, reverse process decoder, and L_0 We assume that image data consists of integers from 0 to 255 scaled linearly to the range of -1 to 1. This ensures that the neural network reverse process operates on consistently scaled inputs starting from the standard normal prior. To obtain discrete log likelihoods, we set the last term of the reverse process to an independent discrete decoder derived from a Gaussian distribution. (It would be straightforward to instead incorporate a more powerful decoder like a conditional autoregressive model, but we leave that to future work.) Similar to the discretized continuous distributions used in VAE decoders and autoregressive models, our choice here ensures that the variational bound is a lossless codelength of discrete data, without need of adding noise to the data or incorporating the Jacobian of the scaling operation into the log likelihood. At the end of sampling, we display the mean of the distribution at time 1 noiselessly. Subsection 3.4: Simplified training objective With the reverse process and decoder defined above, the variational bound is clearly differentiable with respect to theta and is ready to be employed for training. However, we found it beneficial to sample quality (and simpler to implement) to train on a variant of the variational bound. This variant is a simplified objective where t is uniform between 1 and T. 
The t equals 1 case corresponds to L_0. The t greater than 1 cases correspond to an unweighted version of the variational bound term, analogous to the loss weighting used by the NCSN denoising score matching model. (L_T does not appear because the forward process variances beta_t are fixed.) The complete training procedure uses this simplified objective. Since our simplified objective discards the weighting in the original term, it is a weighted variational bound that emphasizes different aspects of reconstruction compared to the standard variational bound. In particular, our diffusion process setup causes the simplified objective to down-weight loss terms corresponding to small t. These terms train the network to denoise data with very small amounts of noise, so it is beneficial to down-weight them so that the network can focus on more difficult denoising tasks at larger t terms. We will see in our experiments that this reweighting leads to better sample quality.

Section 4: Experiments

We set T equals 1000 for all experiments so that the number of neural network evaluations needed during sampling matches previous work. We set the forward process variances to constants increasing linearly from beta_1 equals 10 to the power of -4 to beta_T equals 0.02. These constants were chosen to be small relative to data scaled to the range of -1 to 1, ensuring that reverse and forward processes have approximately the same functional form while keeping the signal-to-noise ratio at x_T as small as possible. To represent the reverse process, we use a U-Net backbone similar to an unmasked PixelCNN++ with group normalization throughout. Parameters are shared across time, which is specified to the network using the Transformer sinusoidal position embedding. We use self-attention at the 16 by 16 feature map resolution.

Subsection 4.1: Sample quality

On CIFAR10, with our FID score of 3.17, our unconditional model achieves better sample quality than most models in the literature, including class conditional models. Our FID score is computed with respect to the training set, as is standard practice; when we compute it with respect to the test set, the score is 5.24, which is still better than many of the training set FID scores in the literature. We find that training our models on the true variational bound yields better codelengths than training on the simplified objective, as expected, but the latter yields the best sample quality. We also generated samples for CIFAR10, CelebA-HQ 256 by 256, and LSUN 256 by 256.

Subsection 4.2: Reverse process parameterization and training objective ablation

We show the sample quality effects of reverse process parameterizations and training objectives. We find that the baseline option of predicting the forward process posterior mean works well only when trained on the true variational bound instead of unweighted mean squared error, a simplified objective. We also see that learning reverse process variances leads to unstable training and poorer sample quality compared to fixed variances. Predicting epsilon, as we proposed, performs approximately as well as predicting the posterior mean when trained on the variational bound with fixed variances, but much better when trained with our simplified objective.

Subsection 4.3: Progressive coding

The codelengths of our CIFAR10 models show a gap between train and test of at most 0.03 bits per dimension, which is comparable to the gaps reported with other likelihood-based models and indicates that our diffusion model is not overfitting.
Still, while our lossless codelengths are better than the large estimates reported for energy based models and score matching using annealed importance sampling, they are not competitive with other types of likelihood-based generative models. Since our samples are nonetheless of high quality, we conclude that diffusion models have an inductive bias that makes them excellent lossy compressors. Treating the variational bound terms L_1 through L_T as rate and L_0 as distortion, our CIFAR10 model with the highest quality samples has a rate of 1.78 bits per dimension and a distortion of 1.97 bits per dimension, which amounts to a root mean squared error of 0.95 on a scale from 0 to 255. More than half of the lossless codelength describes imperceptible distortions.

Paragraph: Progressive lossy compression.

We can probe further into the rate-distortion behavior of our model by introducing a progressive lossy code. This code assumes access to a procedure that can transmit a sample from a distribution q using, on average, approximately the KL divergence between q and p bits, for any distributions p and q, where only p is available to the receiver beforehand. When applied to a data sample, this procedure transmits x_T through x_0 in sequence using a total expected codelength equal to the variational bound. The receiver, at any time t, has the partial information x_t fully available and can progressively estimate x_0. A rate-distortion plot on the CIFAR10 test set shows the resulting behavior. At each time t, the distortion is calculated as the root mean squared error, and the rate is calculated as the cumulative number of bits received so far. The distortion decreases steeply in the low-rate region of the rate-distortion plot, indicating that the majority of the bits are indeed allocated to imperceptible distortions.

Paragraph: Progressive generation.

We also run a progressive unconditional generation process given by progressive decompression from random bits. In other words, we predict the result of the reverse process, a predicted x_0, while sampling from the reverse process. The resulting sample quality of the predicted x_0 over the course of the reverse process shows that large scale image features appear first and details appear last. Stochastic predictions of x_0 given x_t for various t show that when t is small, all but fine details are preserved, and when t is large, only large scale features are preserved. Perhaps these are hints of conceptual compression.

Paragraph: Connection to autoregressive decoding.

Note that the variational bound can be rewritten in an alternate form. Now consider setting the diffusion process length T to the dimensionality of the data, defining the forward process so that it masks out coordinates one by one, setting the prior to place all mass on a blank image, and taking the reverse process to be a fully expressive conditional distribution. With these choices, minimizing the KL divergence trains the model to predict the masked coordinate given the others. Thus, training with this particular diffusion is training an autoregressive model. We can therefore interpret the Gaussian diffusion model as a kind of autoregressive model with a generalized bit ordering that cannot be expressed by reordering data coordinates.
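Finally, a hedged sketch of the epsilon-parameterized sampling step from Subsection 3.2 and the progressive estimate of x_0 discussed above; it assumes a trained eps_model and reuses the schedule tensors (betas, alphas, alpha_bars) from the earlier sketch:

    @torch.no_grad()
    def p_sample(eps_model, x_t, t):
        # One reverse step: sample x_{t-1} from x_t, using sigma_t^2 = beta_t.
        beta, alpha, ab = betas[t], alphas[t], alpha_bars[t]
        t_batch = torch.full((x_t.shape[0],), t, dtype=torch.long)
        eps = eps_model(x_t, t_batch)
        mean = (x_t - beta / (1.0 - ab).sqrt() * eps) / alpha.sqrt()
        if t == 0:
            return mean  # display the mean noiselessly at the final step
        return mean + beta.sqrt() * torch.randn_like(x_t)

    def predict_x0(x_t, eps, t):
        # Progressive estimate of x_0: invert the closed-form forward sample.
        ab = alpha_bars[t]
        return (x_t - (1.0 - ab).sqrt() * eps) / ab.sqrt()

Running p_sample from t = T - 1 down to 0, starting from standard normal noise, implements the full sampling chain; calling predict_x0 along the way reproduces the progressive generation described above.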
Diffusion models are latent variable models of the form p theta of x zero is defined as the integral of p theta of x zero through x T with respect to x one through x T, where x one through x T are latents of the same dimensionality as the data x zero sampled from q of x zero. The joint distribution p theta of x zero through x T is called the reverse process, and it is defined as a Markov chain with learned Gaussian transitions starting at p of x T, which is a standard normal distribution. What distinguishes diffusion models from other types of latent variable models is that the approximate posterior q of x one through x T given x zero, called the forward process or diffusion process, is fixed to a Markov chain that gradually adds Gaussian noise to the data according to a variance schedule beta one through beta T. Training is performed by optimizing the usual variational bound on negative log likelihood. The forward process variances beta t can be learned by reparameterization or held constant as hyperparameters, and expressiveness of the reverse process is ensured in part by the choice of Gaussian conditionals in p theta of x t minus one given x t, because both processes have the same functional form when beta t are small. A notable property of the forward process is that it admits sampling x t at an arbitrary timestep t in closed form: with the notation alpha t defined as one minus beta t and alpha bar t defined as the product of alpha s from s equals one to t, the marginal q of x t given x zero is a Gaussian with mean equal to the square root of alpha bar t times x zero and covariance equal to one minus alpha bar t times the identity. Efficient training is therefore possible by optimizing random terms of L with stochastic gradient descent. Further improvements come from variance reduction by rewriting L. This rewritten form uses KL divergence to directly compare p theta of x t minus one given x t against forward process posteriors, which are tractable when conditioned on x zero. Consequently, all KL divergences are comparisons between Gaussians, so they can be calculated in a Rao-Blackwellized fashion with closed form expressions instead of high variance Monte Carlo estimates. Section three: Diffusion models and denoising autoencoders Diffusion models might appear to be a restricted class of latent variable models, but they allow a large number of degrees of freedom in implementation. One must choose the variances beta t of the forward process and the model architecture and Gaussian distribution parameterization of the reverse process. To guide our choices, we establish a new explicit connection between diffusion models and denoising score matching that leads to a simplified, weighted variational bound objective for diffusion models. Ultimately, our model design is justified by simplicity and empirical results. Our discussion is categorized by the terms of the variational bound. Subsection three point one: Forward process and L T We ignore the fact that the forward process variances beta t are learnable by reparameterization and instead fix them to constants. Thus, in our implementation, the approximate posterior q has no learnable parameters, so L T is a constant during training and can be ignored. Subsection three point two: Reverse process and L one through L T minus one Now we discuss our choices in the reverse process for t between one and T. First, we set the covariance matrix to untrained time dependent constants. Experimentally, both sigma t squared equals beta t and sigma t squared equals beta tilde t, which is beta t scaled by the ratio of one minus alpha bar t minus one to one minus alpha bar t, had similar results.
The first choice is optimal for x zero sampled from a standard normal distribution, and the second is optimal for x zero deterministically set to one point. These are the two extreme choices corresponding to upper and lower bounds on reverse process entropy for data with coordinatewise unit variance. Second, to represent the mean, we propose a specific parameterization motivated by the following analysis of L t. We see that the most straightforward parameterization of the mean is a model that predicts the forward process posterior mean. However, we can expand this term further. This reveals that, given x t, the mean must predict one over the square root of alpha t times the quantity x t minus beta t divided by the square root of one minus alpha bar t times epsilon, where epsilon is the noise used to produce x t from x zero. Since x t is available as input to the model, we may choose a parameterization where a function approximator is intended to predict epsilon from x t. To sample x t minus one, we compute this mean with epsilon replaced by the function approximator's output and then add sigma t times z, where z is standard normal noise. The complete sampling procedure resembles Langevin dynamics with the function approximator as a learned gradient of the data density. Furthermore, with this parameterization, the variational bound term simplifies to a form which resembles denoising score matching over multiple noise scales indexed by t. As this term is equal to one term of the variational bound for the Langevin-like reverse process, we see that optimizing an objective resembling denoising score matching is equivalent to using variational inference to fit the finite-time marginal of a sampling chain resembling Langevin dynamics. To summarize, we can train the reverse process mean function approximator to predict the forward process posterior mean, or by modifying its parameterization, we can train it to predict epsilon. (There is also the possibility of predicting x zero, but we found this to lead to worse sample quality early in our experiments.) We have shown that the epsilon-prediction parameterization both resembles Langevin dynamics and simplifies the diffusion model's variational bound to an objective that resembles denoising score matching. Nonetheless, it is just another parameterization of the reverse process, so we verify its effectiveness in our experiments in an ablation where we compare predicting epsilon against predicting the forward process posterior mean. Subsection three point three: Data scaling, reverse process decoder, and L zero We assume that image data consists of integers from zero to two hundred fifty five scaled linearly to the range of negative one to one. This ensures that the neural network reverse process operates on consistently scaled inputs starting from the standard normal prior. To obtain discrete log likelihoods, we set the last term of the reverse process to an independent discrete decoder derived from a Gaussian distribution. (It would be straightforward to instead incorporate a more powerful decoder like a conditional autoregressive model, but we leave that to future work.) Similar to the discretized continuous distributions used in VAE decoders and autoregressive models, our choice here ensures that the variational bound is a lossless codelength of discrete data, without need of adding noise to the data or incorporating the Jacobian of the scaling operation into the log likelihood. At the end of sampling, we display the mean of the distribution at time one noiselessly.
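Putting the pieces of this subsection together, one reverse step under the epsilon parameterization can be sketched as follows. Here eps_model, betas, and alpha_bar are the same hypothetical objects as in the earlier snippets, sigma t squared equals beta t is the first of the two variance choices above, and the final step returns the mean noiselessly, as just described.

```python
import torch

@torch.no_grad()
def p_sample_step(eps_model, x_t, t, betas, alpha_bar):
    """One Langevin-like reverse step: form the parameterized mean from the
    predicted noise, then add sigma_t * z, with no noise at the final step.
    t is a Python int here; a real implementation would batch it."""
    beta_t = betas[t]
    mean = (x_t - beta_t / (1.0 - alpha_bar[t]).sqrt() * eps_model(x_t, t)) \
           / (1.0 - beta_t).sqrt()
    if t == 0:
        return mean                  # display the mean noiselessly at the end
    return mean + beta_t.sqrt() * torch.randn_like(x_t)
```

Iterating this step from t equals T minus one down to zero, starting from standard normal noise, is the complete sampling procedure.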
Subsection three point four: Simplified training objective With the reverse process and decoder defined above, the variational bound is clearly differentiable with respect to theta and is ready to be employed for training. However, we found it beneficial to sample quality (and simpler to implement) to train on a variant of the variational bound. This variant is a simplified objective where t is uniform between one and T. The t equals one case corresponds to L zero. The t is greater than one cases correspond to an unweighted version of the variational bound term, analogous to the loss weighting used by the NCSN denoising score matching model. (L T does not appear because the forward process variances beta t are fixed.) The complete training procedure uses this simplified objective. Since our simplified objective discards the weighting in the original term, it is a weighted variational bound that emphasizes different aspects of reconstruction compared to the standard variational bound. In particular, our diffusion process setup causes the simplified objective to down-weight loss terms corresponding to small t. These terms train the network to denoise data with very small amounts of noise, so it is beneficial to down-weight them so that the network can focus on more difficult denoising tasks at larger t terms. We will see in our experiments that this reweighting leads to better sample quality. Section four: Experiments We set T equals one thousand for all experiments so that the number of neural network evaluations needed during sampling matches previous work. We set the forward process variances to constants increasing linearly from beta one equals ten to the power of negative four to beta T equals zero point zero two. These constants were chosen to be small relative to data scaled to the range of negative one to one, ensuring that reverse and forward processes have approximately the same functional form while keeping the signal-to-noise ratio at x T as small as possible. To represent the reverse process, we use a U-Net backbone similar to an unmasked PixelCNN++ with group normalization throughout. Parameters are shared across time, which is specified to the network using the Transformer sinusoidal position embedding. We use self-attention at the sixteen by sixteen feature map resolution. Subsection four point one: Sample quality On CIFAR10, with our FID score of three point one seven, our unconditional model achieves better sample quality than most models in the literature, including class conditional models. Our FID score is computed with respect to the training set, as is standard practice; when we compute it with respect to the test set, the score is five point two four, which is still better than many of the training set FID scores in the literature. We find that training our models on the true variational bound yields better codelengths than training on the simplified objective, as expected, but the latter yields the best sample quality. We also generated samples for CIFAR10, CelebA-HQ two hundred fifty-six by two hundred fifty-six, and LSUN two hundred fifty-six by two hundred fifty-six. Subsection four point two: Reverse process parameterization and training objective ablation We show the sample quality effects of reverse process parameterizations and training objectives. We find that the baseline option of predicting the forward process posterior mean works well only when trained on the true variational bound instead of unweighted mean squared error, a simplified objective. 
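One architectural detail above lends itself to a short sketch: parameters are shared across time, and the timestep enters through a Transformer-style sinusoidal embedding. The exact base constant and the way the embedding is injected into each block are implementation choices not fixed by the text; a plausible minimal version:

```python
import math
import torch

def timestep_embedding(t, dim):
    """Transformer-style sinusoidal embedding of integer timesteps t, shape
    (batch,), returning (batch, dim) features used to condition the shared
    U-Net; dim is assumed even."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = t.float().unsqueeze(1) * freqs.unsqueeze(0)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=1)
```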
We also see that learning reverse process variances leads to unstable training and poorer sample quality compared to fixed variances. Predicting epsilon, as we proposed, performs approximately as well as predicting the posterior mean when trained on the variational bound with fixed variances, but much better when trained with our simplified objective. Subsection four point three: Progressive coding The codelengths of our CIFAR10 models show a gap between train and test of at most zero point zero three bits per dimension, which is comparable to the gaps reported with other likelihood-based models and indicates that our diffusion model is not overfitting. Still, while our lossless codelengths are better than the large estimates reported for energy-based models and score matching using annealed importance sampling, they are not competitive with other types of likelihood-based generative models. Since our samples are nonetheless of high quality, we conclude that diffusion models have an inductive bias that makes them excellent lossy compressors. Treating the variational bound terms L one through L T as rate and L zero as distortion, our CIFAR10 model with the highest quality samples has a rate of one point seven eight bits per dimension and a distortion of one point nine seven bits per dimension, which amounts to a root mean squared error of zero point nine five on a scale from zero to two hundred fifty five. More than half of the lossless codelength describes imperceptible distortions. Paragraph: Progressive lossy compression. We can probe further into the rate-distortion behavior of our model by introducing a progressive lossy code. This code assumes access to a procedure that can transmit a sample drawn from a distribution q using, on average, approximately the KL divergence between q and p in bits, for any distributions p and q for which only p is available to the receiver beforehand. When applied to a data sample, this procedure transmits x T through x zero in sequence using a total expected codelength equal to the variational bound. The receiver, at any time t, has the partial information x t fully available and can progressively estimate x zero as x t minus the square root of one minus alpha bar t times the predicted epsilon, all divided by the square root of alpha bar t. A rate-distortion plot on the CIFAR10 test set shows the resulting behavior. At each time t, the distortion is calculated as the root mean squared error between x zero and its estimate, and the rate is calculated as the cumulative number of bits received so far. The distortion decreases steeply in the low-rate region of the rate-distortion plot, indicating that the majority of the bits are indeed allocated to imperceptible distortions. Paragraph: Progressive generation. We also run a progressive unconditional generation process given by progressive decompression from random bits. In other words, we predict the result of the reverse process, a predicted x zero, while sampling from the reverse process. The resulting sample quality of the predicted x zero over the course of the reverse process shows that large scale image features appear first and details appear last. Stochastic predictions of x zero given x t for various t show that when t is small, all but fine details are preserved, and when t is large, only large scale features are preserved. Perhaps these are hints of conceptual compression. Paragraph: Connection to autoregressive decoding. Note that the variational bound can be rewritten in an alternate form.
Now consider setting the diffusion process length T to the dimensionality of the data, defining the forward process so that it masks out coordinates one by one, setting the prior to place all mass on a blank image, and taking the reverse process to be a fully expressive conditional distribution. With these choices, minimizing the KL divergence trains the model to predict the masked coordinate given the others. Thus, training with this particular diffusion is training an autoregressive model. We can therefore interpret the Gaussian diffusion model as a kind of autoregressive model with a generalized bit ordering that cannot be expressed by reordering data coordinates.
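Two short sketches close out this material. First, the Rao-Blackwellized bound computation mentioned above reduces to closed-form KL divergences between Gaussians; for the diagonal-covariance case used throughout:

```python
import torch

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Per-coordinate closed-form KL( N(mu_q, var_q) || N(mu_p, var_p) );
    sum over coordinates for the full divergence."""
    return 0.5 * (torch.log(var_p / var_q) - 1.0
                  + (var_q + (mu_q - mu_p) ** 2) / var_p)
```

Second, the distortion half of the rate-distortion curve can be traced by forming the receiver's estimate of x zero at each time t and rescaling its root mean squared error from the negative one to one data range back to the zero to two hundred fifty five scale. The rate term, the cumulative KL bits, is omitted for brevity; eps_model, betas, and alpha_bar are the hypothetical objects from the earlier snippets.

```python
import torch

@torch.no_grad()
def distortion_curve(eps_model, x0, betas, alpha_bar):
    """RMSE (on the 0..255 scale) of the progressive estimate of x_0 at each
    time t, with x0 assumed scaled to [-1, 1]; a fresh x_t is drawn per t."""
    out = []
    for t in range(len(betas) - 1, -1, -1):
        a = alpha_bar[t]
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)
        t_b = torch.full((x0.shape[0],), t, dtype=torch.long)
        x0_hat = (x_t - (1.0 - a).sqrt() * eps_model(x_t, t_b)) / a.sqrt()
        out.append((((x0 - x0_hat) ** 2).mean().sqrt() * 127.5).item())
    return out  # ordered from time T (little received) down to time 1
```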
long_en_213
news_en
960
en
A serving of giraffe legs with a side order of spicy sea urchin: the diet of 'chav' Romans in Pompeii revealed. Archaeologists led by the University of Cincinnati said the discoveries of exotic meats prove the richness, variety and range of a non-elite diet. The international team spent more than a decade analysing the homes, shops and businesses of a non-elite district in Pompeii. Inexpensive foods such as grains, fruits, nuts, olives, local fish, small cuts of meat and eggs were also found in latrines and drains in the area. Lower- and middle-class Romans living in Pompeii feasted on exotic meats and spicy seafood before the city was struck by a devastating volcanic eruption in 79 AD. Archaeologists have disproved popular preconceptions that the rich alone dined on imported delicacies, including flamingo, while the poor survived on gruel; instead, all classes enjoyed a rich and varied diet. The researchers discovered sea urchin and the butchered leg of a giraffe among less exotic foods like grain and eggs in a poorer area of the ruined city. Steven Ellis, an associate professor of classics at the University of Cincinnati, led the search through drains and latrines of a non-elite area of Pompeii to find bones and food deposits and build a picture of lower- and middle-class Romans' eating habits. What did Pompeians eat? Inexpensive and widely available foods such as grains, fruits, nuts, olives, lentils, local fish, and chicken eggs were found in drains and latrines in a relatively poor area of Pompeii, along with small cuts of meat and salted fish from Spain. A drain from a central property in the district revealed a rich variety of foods and imports from outside Italy, such as shellfish, sea urchin, and even delicacies including a butchered giraffe leg joint. Archaeologists said their finds demonstrate the richness, variety, and range of a non-elite diet, as well as long-distance trade in exotic and wild animals at the time. The international team spent more than a decade analysing the homes, shops, and businesses of a lower- to middle-class district in the Roman city, where earlier buildings date back to the 6th century. The area covers 10 separate building plots and a total of 20 shop fronts, most of which served food and drink. Archaeologists examined waste in drains as well as ancient latrines and cesspits to uncover charred food waste from kitchens, including fully processed foods like grain, as well as human waste. Professor Ellis said, "The material from the drains revealed a range and quantity of materials that suggest a clear socio-economic distinction between the activities and consumption habits of each property, which were otherwise indistinguishable as hospitality businesses." An inn with holes that held food and wine, the thermopolium of Lucius Vetutius Placidus, was located in another area of the city. A drain from a central property revealed a rich variety of foods as well as imports from outside Italy, such as shellfish, sea urchin and even delicacies including the butchered leg joint of a giraffe.
"That the bone represents the height of exotic food is underscored by the fact that this is thought to be the only giraffe bone ever recorded from an archaeological excavation in Roman Italy," Professor Ellis said. "How part of an animal, butchered, came to be a kitchen scrap in a seemingly standard Pompeian restaurant not only speaks to long-distance trade in exotic and wild animals, but also to the richness, variety, and range of a non-elite diet," Professor Ellis said. Deposits discovered also included imported spices from as far away as Indonesia, highlighting the incredible reach of the Romans. "The traditional vision of some mass of hapless lemmings, scrounging for whatever they can pinch from the side of a street, or huddled around a bowl of gruel, needs to be replaced by a higher fare and standard of living, at least for the urbanites in Pompeii," Professor Ellis said. While inexpensive food such as grains, fruit, nuts, and eggs were found, the archaeologists also uncovered rich foods including sea urchin and even the leg of a giraffe, which was found in the drain of a property in a relatively poor part of the Roman city. One of the deposits dates to the fourth century and is rare, since few other deposits survived the early stage in the development of Pompeii. Professor Ellis and his international team aim to reveal the structural and social relationships over time between working-class Pompeian households, as well as to work out the role that the middle classes played in shaping the city. "However, one of the larger datasets and themes of our research has been diet and the infrastructure of food consumption and food ways," he added. The research will be presented at the joint annual meeting of the Archaeological Institute of America (AIA) and the American Philological Association (APA) in Chicago this weekend. New findings suggest commoners in ancient Pompeii ate a varied diet, and wealthier residents sometimes even ate giraffe. Food remains recovered from Pompeii's drains reveal that middle- and lower-class residents consumed inexpensive but nutritious foods, while slightly wealthier citizens enjoyed delicacies. These results contradict the belief that the Roman elite dined on exotic fare while poor Romans subsisted on birdseed.
A serving of giraffe legs with a side order of spicy sea urchin: the diet of 'chav' Romans in Pompeii revealed. Archaeologists led by the University of Cincinnati said the discoveries of exotic meats prove the richness, variety and range of a non-elite diet. The international team spent more than a decade analysing the homes, shops and businesses of a non-elite district in Pompeii. Inexpensive foods such as grains, fruits, nuts, olives, local fish, small cuts of meat and eggs were also found in latrines and drains in the area. Lower- and middle-class Romans living in Pompeii feasted on exotic meats and spicy seafood before the city was struck by a devastating volcanic eruption in seventy nine AD. Archaeologists have disproved popular preconceptions that the rich alone dined on imported delicacies, including flamingo, while the poor survived on gruel; instead, all classes enjoyed a rich and varied diet. The researchers discovered sea urchin and the butchered leg of a giraffe among less exotic foods like grain and eggs in a poorer area of the ruined city. Steven Ellis, an associate professor of classics at the University of Cincinnati, led the search through drains and latrines of a non-elite area of Pompeii to find bones and food deposits and build a picture of lower- and middle-class Romans' eating habits. What did Pompeians eat? Inexpensive and widely available foods such as grains, fruits, nuts, olives, lentils, local fish, and chicken eggs were found in drains and latrines in a relatively poor area of Pompeii, along with small cuts of meat and salted fish from Spain. A drain from a central property in the district revealed a rich variety of foods and imports from outside Italy, such as shellfish, sea urchin, and even delicacies including a butchered giraffe leg joint. Archaeologists said their finds demonstrate the richness, variety, and range of a non-elite diet, as well as long-distance trade in exotic and wild animals at the time. The international team spent more than a decade analysing the homes, shops, and businesses of a lower- to middle-class district in the Roman city, where earlier buildings date back to the sixth century. The area covers ten separate building plots and a total of twenty shop fronts, most of which served food and drink. Archaeologists examined waste in drains as well as ancient latrines and cesspits to uncover charred food waste from kitchens, including fully processed foods like grain, as well as human waste. Professor Ellis said, "The material from the drains revealed a range and quantity of materials that suggest a clear socio-economic distinction between the activities and consumption habits of each property, which were otherwise indistinguishable as hospitality businesses." An inn with holes that held food and wine, the thermopolium of Lucius Vetutius Placidus, was located in another area of the city. A drain from a central property revealed a rich variety of foods as well as imports from outside Italy, such as shellfish, sea urchin and even delicacies including the butchered leg joint of a giraffe.
"That the bone represents the height of exotic food is underscored by the fact that this is thought to be the only giraffe bone ever recorded from an archaeological excavation in Roman Italy," Professor Ellis said. "How part of an animal, butchered, came to be a kitchen scrap in a seemingly standard Pompeian restaurant not only speaks to long-distance trade in exotic and wild animals, but also to the richness, variety, and range of a non-elite diet," Professor Ellis said. Deposits discovered also included imported spices from as far away as Indonesia, highlighting the incredible reach of the Romans. "The traditional vision of some mass of hapless lemmings, scrounging for whatever they can pinch from the side of a street, or huddled around a bowl of gruel, needs to be replaced by a higher fare and standard of living, at least for the urbanites in Pompeii," Professor Ellis said. While inexpensive food such as grains, fruit, nuts, and eggs were found, the archaeologists also uncovered rich foods including sea urchin and even the leg of a giraffe, which was found in the drain of a property in a relatively poor part of the Roman city. One of the deposits dates to the fourth century and is rare, since few other deposits survived the early stage in the development of Pompeii. Professor Ellis and his international team aim to reveal the structural and social relationships over time between working-class Pompeian households, as well as to work out the role that the middle classes played in shaping the city. "However, one of the larger datasets and themes of our research has been diet and the infrastructure of food consumption and food ways," he added. The research will be presented at the joint annual meeting of the Archaeological Institute of America (AIA) and the American Philological Association (APA) in Chicago this weekend. New findings suggest commoners in ancient Pompeii ate a varied diet, and wealthier residents sometimes even ate giraffe. Food remains recovered from Pompeii's drains reveal that middle- and lower-class residents consumed inexpensive but nutritious foods, while slightly wealthier citizens enjoyed delicacies. These results contradict the belief that the Roman elite dined on exotic fare while poor Romans subsisted on birdseed.
long_en_177
paper_en
1,237
en
We compare our method with previous leading approaches across several settings, including normal and high resolution, and also consider private models. At normal resolution, Mini-Gemini consistently outperforms existing models across a wide range of LLMs. In the efficient model category, Mini-Gemini, when configured with Gemma-2B, demonstrates superior performance compared to the efficient MobileVLM and surpasses InstructBLIP equipped with Vicuna-7B or even Vicuna-13B. The scalability of Mini-Gemini is evident when larger LLMs are employed. Given the same LLM, the proposed Mini-Gemini is validated to surpass LLaVA-1.5 by a large margin across all benchmarks. Notably, with the Hermes-2-Yi-34B LLM, Mini-Gemini achieves exceptional results, outpacing high-resource private models like Qwen-VL-Plus and Gemini Pro in some challenging benchmarks like MMMU and MMB. High Resolution. To validate the framework for extended visual tokens, we perform experiments with an input size of 672 for the LR visual encoder and 1536 for the HR visual encoder. As discussed above, the HR visual encoder primarily serves to offer high-resolution candidate information. Importantly, despite the increased resolution, the effective number of visual tokens processed by the LLM remains consistent with the LR input size of 672, ensuring computational efficiency. The benefits of this approach are particularly evident in detail-oriented tasks. For example, in the TextVQA benchmark, our method achieved strong performance with the Hermes-2-Yi-34B configuration, closely matching the performance of the well-established Gemini Pro. The results show that Mini-Gemini excels in more challenging benchmarks as well. For instance, the proposed method is on par with Qwen-VL-Plus on the MathVista and MMMU benchmarks and even surpasses Gemini Pro and GPT-4V on the widely-adopted MMB benchmark. Subsection 4.3. Component-wise Analysis Patch Info Mining. We first delve into the proposed patch info mining. It is clear that the model achieves significant gains with ConvNeXt-L integrated as the vision encoder for HR images. For example, when the LR and HR are respectively set to 224 and 512, the model shows a significant increase in performance on the TextVQA and MME datasets. Elevating the HR resolution to 768 further widens the performance margin compared to the baseline. These results underscore the substantial impact of patch info mining in harnessing more detailed visual cues. When we further extend the LR resolution to 336, patch info mining still contributes consistent gains. For instance, with the default ConvNeXt-L as vision encoder, it surpasses the baseline across the TextVQA, MME, and MM-Vet datasets. This proves the capability of the designed modules as the input resolution is scaled up. Vision Encoder. To investigate the effect brought by mining candidates, we conduct experiments with various HR vision encoders. Compared with the default ConvNeXt-L, we add two encoders for contrast trials, namely ConvNeXt-B and ConvNeXt-XXL. With the basic ConvNeXt-B, the model performs better in TextVQA and MM-Vet. However, the ConvNeXt-L encoder consistently delivers peak results, especially in the MME and MM-Vet datasets, indicating a superior balance in handling detailed visual information. We can conclude that a larger vision encoder for HR images contributes more to the candidate quality, but performance saturates with an overly large encoder like ConvNeXt-XXL.
Hence, considering the balance between effectiveness and computational efficiency, ConvNeXt-L is chosen as the default HR vision encoder. This decision is based on its ability to provide high-quality visual information mining while maintaining reasonable computational demands, as evidenced by the comparative performance across the benchmarks. High-quality Data. In this era, the significance of high-quality data for enhancing the capabilities of LLMs and VLMs cannot be overstated. In our comprehensive analysis of data combination effects, we begin with a baseline model incorporating patch info mining. The integration of high-quality captions from ShareGPT4V yields improved visual alignment and performance gains. We validate the zero-shot performance on the TextVQA benchmark, notably removing TextCaps data from the training set in line with previous studies. This modification led to a notable performance decrease, underscoring the value of specific data types in training. To counteract this decline, we incorporate additional high-quality captions from LAION-GPT-4V and OCR-specific data, thus enhancing the model's OCR reasoning capabilities. As elaborated earlier, we utilize generation-related instructions to expand the application. It is interesting to find that such data also benefits image understanding and brings gains on the MM-Vet dataset. Moreover, with the high-quality GPT4V responses from the ALLaVA dataset, the framework pushes the baseline significantly higher on the TextVQA and MM-Vet datasets. This comprehensive evaluation underscores the pivotal role of strategic high-quality data integration in amplifying the potential of the Mini-Gemini framework. Visual Token Extension. The proposed patch info mining is adeptly designed to accommodate extended visual tokens, thereby generalizing its utility across different input resolutions. We validate the effectiveness of the token extension. When increasing LR and HR input resolution, the model achieves significant gains in all benchmarks. Notably, in detail-oriented tasks such as TextVQA, we observe a performance uplift, indicating a significant enhancement in the model's ability to handle complex visual data. Our empirical observations suggest that the increase in resolution significantly diminishes visual hallucinations, leading to more accurate and reliable image comprehension. Generally, with the increased visual token number, Mini-Gemini can be scaled up toward greater capability. We can also draw the same conclusion from the high-resolution results. Subsection 4.4. Qualitative Results Visual Understanding. To ascertain the visual comprehension prowess of Mini-Gemini in real-world settings, we apply it to a variety of understanding and reasoning tasks. Thanks to the patch info mining and high-quality data, Mini-Gemini handles several complex cases well. For example, it is capable of recognizing plotted curves in graphical data and directly translating them into Python code for immediate application. Beyond mere recognition, it exhibits a keen attention to detail, accurately describing intricate elements within complex indoor scenes, and demonstrating a nuanced understanding of character associations in memes. Moreover, Mini-Gemini's analytical capabilities extend to chart analysis and practical problem-solving, such as intelligence tests. Image Generation. We provide a comprehensive evaluation of Mini-Gemini's generation capabilities. 
Compared with recent studies such as AnyGPT and ChatIllusion, Mini-Gemini's stronger multi-modal understanding allows it to generate text-to-image captions that better align with the given instructions, resulting in more contextually appropriate image-text answers. A noteworthy point is its proficiency in generating high-quality content based on multi-modal human instructions, with text-only training data. This capability underscores Mini-Gemini's robust image-text alignment and semantic interpretation skills, which come into play effectively in the inference stage. By leveraging the powerful reasoning ability of the LLM, it can produce reasonable image-text outputs in single or multi-round conversations. Section 5. Conclusion and Discussion We presented Mini-Gemini, a streamlined and potent framework for multi-modality VLMs. The essence of Mini-Gemini is to harness the latent capabilities of VLMs through strategic framework design, enriched data quality, and expanded functional scope. At its core, patch info mining enables efficient extraction of detailed visual cues by engaging with high-resolution candidates. From the data perspective, our meticulously compiled high-quality dataset ensures accurate vision-language alignment and bolsters strong instruction-following ability.
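As a closing aside on the mechanism at the heart of this section: the text does not spell out the patch info mining operator itself. One plausible reading, consistent with low-resolution tokens mining detail cues from high-resolution candidates while the token count stays at the LR count, is a cross-attention in which each LR visual token queries the HR patch features of its region. Everything in the sketch below (the names lr_tokens and hr_patches, the shapes, the residual connection, and the projection matrices w_q, w_k, w_v) is illustrative rather than taken from the text.

```python
import torch

def patch_info_mining(lr_tokens, hr_patches, w_q, w_k, w_v):
    """Hypothetical sketch: each LR token (query) attends over the HR patch
    features (keys/values) of its corresponding region to mine detail cues.
    lr_tokens: (B, N, D); hr_patches: (B, N, M, D), M HR candidates per token;
    w_q, w_k, w_v: (D, D) projections. Output keeps the LR token count N."""
    q = lr_tokens @ w_q
    k = hr_patches @ w_k
    v = hr_patches @ w_v
    attn = torch.einsum('bnd,bnmd->bnm', q, k) / (q.shape[-1] ** 0.5)
    mined = torch.einsum('bnm,bnmd->bnd', attn.softmax(dim=-1), v)
    return lr_tokens + mined
```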
We compare our method with previous leading approaches across several settings, including normal and high resolution, and also consider private models. At normal resolution, Mini-Gemini consistently outperforms existing models across a wide range of LLMs. In the efficient model category, Mini-Gemini, when configured with Gemma two B, demonstrates superior performance compared to the efficient MobileVLM and surpasses InstructBLIP equipped with Vicuna seven B or even thirteen B. The scalability of Mini-Gemini is evident when larger LLMs are employed. Given the same LLM, the proposed Mini-Gemini is validated to surpass LLaVA one point five by a large margin across all benchmarks. Notably, with the Hermes two Yi thirty four B LLM, Mini-Gemini achieves exceptional results, outpacing high-resource private models like Qwen-VL-Plus and Gemini Pro in some challenging benchmarks like MMMU and MMB. High Resolution. To validate the framework for extended visual tokens, we perform experiments with an input size of six hundred seventy two for the LR visual encoder and one thousand five hundred thirty six for the HR visual encoder. As discussed above, the HR visual encoder primarily serves to offer high-resolution candidate information. Importantly, despite the increased resolution, the effective number of visual tokens processed by the LLM remains consistent with the LR input size of six hundred seventy two, ensuring computational efficiency. The benefits of this approach are particularly evident in detail-oriented tasks. For example, in the TextVQA benchmark, our method achieved strong performance with the Hermes two Yi thirty four B configuration, closely matching the performance of the well-established Gemini Pro. The results show that Mini-Gemini excels in more challenging benchmarks as well. For instance, the proposed method is on par with Qwen-VL-Plus on the MathVista and MMMU benchmarks and even surpasses Gemini Pro and GPT four V on the widely-adopted MMB benchmark. Subsection four point three. Component-wise Analysis Patch Info Mining. We first delve into the proposed patch info mining. It is clear that the model achieves significant gains with ConvNeXt L integrated as the vision encoder for HR images. For example, when the LR and HR are respectively set to two hundred twenty four and five hundred twelve, the model shows a significant increase in performance on the TextVQA and MME datasets. Elevating the HR resolution to seven hundred sixty eight further widens the performance margin compared to the baseline. These results underscore the substantial impact of patch info mining in harnessing more detailed visual cues. When we further extend the LR resolution to three hundred thirty six, patch info mining still contributes consistent gains. For instance, with the default ConvNeXt L as vision encoder, it surpasses the baseline across the TextVQA, MME, and MM-Vet datasets. This proves the capability of the designed modules as the input resolution is scaled up. Vision Encoder. To investigate the effect brought by mining candidates, we conduct experiments with various HR vision encoders. Compared with the default ConvNeXt-L, we add two encoders for contrast trials, namely ConvNeXt-B and ConvNeXt-XXL. With the basic ConvNeXt-B, the model performs better in TextVQA and MM-Vet. However, the ConvNeXt-L encoder consistently delivers peak results, especially in the MME and MM-Vet datasets, indicating a superior balance in handling detailed visual information. 
We can conclude that a larger vision encoder for HR images contributes more to the candidate quality, but performance saturates with an overly large encoder like ConvNeXt-XXL. Hence, considering the balance between effectiveness and computational efficiency, ConvNeXt-L is chosen as the default HR vision encoder. This decision is based on its ability to provide high-quality visual information mining while maintaining reasonable computational demands, as evidenced by the comparative performance across the benchmarks. High-quality Data. In this era, the significance of high-quality data for enhancing the capabilities of LLMs and VLMs cannot be overstated. In our comprehensive analysis of data combination effects, we begin with a baseline model incorporating patch info mining. The integration of high-quality captions from ShareGPT four V yields improved visual alignment and performance gains. We validate the zero-shot performance on the TextVQA benchmark, notably removing TextCaps data from the training set in line with previous studies. This modification led to a notable performance decrease, underscoring the value of specific data types in training. To counteract this decline, we incorporate additional high-quality captions from LAION GPT four V and OCR specific data, thus enhancing the model's OCR reasoning capabilities. As elaborated earlier, we utilize generation related instructions to expand the application. It is interesting to find that such data also benefits image understanding and brings gains on the MM Vet dataset. Moreover, with the high-quality GPT four V responses from the ALLaVA dataset, the framework pushes the baseline significantly higher on the TextVQA and MM Vet datasets. This comprehensive evaluation underscores the pivotal role of strategic high-quality data integration in amplifying the potential of the Mini Gemini framework. Visual Token Extension. The proposed patch info mining is adeptly designed to accommodate extended visual tokens, thereby generalizing its utility across different input resolutions. We validate the effectiveness of the token extension. When increasing LR and HR input resolution, the model achieves significant gains in all benchmarks. Notably, in detail oriented tasks such as TextVQA, we observe a performance uplift, indicating a significant enhancement in the model's ability to handle complex visual data. Our empirical observations suggest that the increase in resolution significantly diminishes visual hallucinations, leading to more accurate and reliable image comprehension. Generally, with the increased visual token number, Mini-Gemini can be scaled up toward greater capability. We can also draw the same conclusion from the high-resolution results. Subsection four point four. Qualitative Results Visual Understanding. To ascertain the visual comprehension prowess of Mini-Gemini in real-world settings, we apply it to a variety of understanding and reasoning tasks. Thanks to the patch info mining and high-quality data, Mini-Gemini handles several complex cases well. For example, it is capable of recognizing plotted curves in graphical data and directly translating them into Python code for immediate application. Beyond mere recognition, it exhibits a keen attention to detail, accurately describing intricate elements within complex indoor scenes, and demonstrating a nuanced understanding of character associations in memes. 
Moreover, Mini-Gemini's analytical capabilities extend to chart analysis and practical problem-solving, such as intelligence tests. Image Generation. We provide a comprehensive evaluation of Mini-Gemini's generation capabilities. Compared with recent studies such as AnyGPT and ChatIllusion, Mini-Gemini's stronger multi-modal understanding allows it to generate text-to-image captions that better align with the given instructions, resulting in more contextually appropriate image-text answers. A noteworthy point is its proficiency in generating high-quality content based on multi-modal human instructions, with text-only training data. This capability underscores Mini-Gemini's robust image-text alignment and semantic interpretation skills, which come into play effectively in the inference stage. By leveraging the powerful reasoning ability of the LLM, it can produce reasonable image-text outputs in single or multi-round conversations. Section five. Conclusion and Discussion We presented Mini-Gemini, a streamlined and potent framework for multi-modality VLMs. The essence of Mini-Gemini is to harness the latent capabilities of VLMs through strategic framework design, enriched data quality, and expanded functional scope. At its core, patch info mining enables efficient extraction of detailed visual cues by engaging with high-resolution candidates. From the data perspective, our meticulously compiled high-quality dataset ensures accurate vision-language alignment and bolsters strong instruction-following ability.
long_en_208
news_en
666
en
March 14, 2018 — Birth defect rate pegged at 7 percent for babies born to Zika-infected women By Gene Emery, Reuters Health A pregnant woman who becomes ill from the Zika virus faces a 7 percent chance that her child will be born with birth defects, and that risk jumps to nearly 13 percent if she becomes ill during the first trimester, a new study conducted in French territories in the Americas has concluded. The finding “emphasizes the serious global health threat to pregnant women and their infants posed by congenital (Zika virus) infection,” said Dr. Margaret Honein of the Centers for Disease Control and Prevention in Atlanta in an editorial in the New England Journal of Medicine, where the study appears. The estimates do not include less-obvious developmental problems that may surface later in life. In addition, because the study only included women who fell ill, it does not include the risk for women who may have been infected but did not experience symptoms, which happens in about 80 percent of cases. “Other studies showed earlier that the risk of birth defects did not depend on the presence or the severity of (Zika)-related symptoms,” chief author Dr. Bruno Hoen of the University Medical Center of Guadeloupe told Reuters Health by email. Carl Fichtenbaum, a professor in the Division of Infectious Diseases at the University of Cincinnati College of Medicine in Ohio who was not involved in the research, said the birth defect rate might differ in other populations depending on whether it’s a first or second Zika infection. "In populations where it’s just been introduced it might be more severe for the fetus, but time will tell," he said. The 527 surviving babies in the new study will be followed for at least two years. Only longer-term follow-up will help identify the full spectrum of Zika-related complications, the Hoen team said. The virus was linked to birth defects, particularly microcephaly, during the 2016–2017 outbreak. Zika vaccines are still being tested. Previous estimates of congenital neurologic defects in infected newborns ranged from 6% to 42%. The Hoen team included women who were pregnant from March through November 2016 and had confirmed Zika infection. Their study of 555 fetuses and infants found evidence of microcephaly in 5.8% and severe microcephaly in 1.6%. The earlier the infection in pregnancy, the greater the risk: neurologic or eye problems were seen in 12.7% of babies whose mothers were infected during the first trimester. The rates were 3.6 percent when mothers were infected during the second trimester and 5.3 percent during the third. "This is some of the most compelling data to date that the risk of brain abnormalities, microcephaly, and eye anomalies extends to infections in every trimester of pregnancy," Honein wrote in her editorial. The French territories were French Guiana, Guadeloupe, and Martinique. A Zika registry in the United States has found birth defects in 10 percent of women known to have been infected with the virus; when the infection occurred during the first trimester, the rate was 15 percent. But those studies "do not provide information on the estimated 80 percent of pregnant women with Zika infections who have no reported symptoms," Honein wrote. "Population-level increases in Zika-associated birth defects are unlikely to be recognized without ongoing, timely, and comprehensive surveillance of birth defects that captures all affected fetuses and infants regardless of whether maternal Zika exposure or infection was identified." 
Zika "should definitely be added to the list of infectious agents that can cause severe birth defects, as are rubella virus, cytomegalovirus, and others," Honein said in her email. The study also emphasizes the urgent need to protect women who may become pregnant while traveling or living in areas with Zika-infected mosquitoes, to test women at risk for Zika when they become pregnant, and to develop an effective vaccine, she said. The New England Journal of Medicine, online March 14, 2018.
March fourteen, two thousand eighteen — Birth defect rate pegged at seven percent for babies born to Zika-infected women By Gene Emery, Reuters Health A pregnant woman who becomes ill from the Zika virus faces a seven percent chance that her child will be born with birth defects, and that risk jumps to nearly thirteen percent if she becomes ill during the first trimester, a new study conducted in French territories in the Americas has concluded. The finding “emphasizes the serious global health threat to pregnant women and their infants posed by congenital (Zika virus) infection,” said Dr. Margaret Honein of the Centers for Disease Control and Prevention in Atlanta in an editorial in the New England Journal of Medicine, where the study appears. The estimates do not include less-obvious developmental problems that may surface later in life. In addition, because the study only included women who fell ill, it does not include the risk for women who may have been infected but did not experience symptoms, which happens in about eighty percent of cases. “Other studies showed earlier that the risk of birth defects did not depend on the presence or the severity of (Zika)-related symptoms,” chief author Dr. Bruno Hoen of the University Medical Center of Guadeloupe told Reuters Health by email. Carl Fichtenbaum, a professor in the Division of Infectious Diseases at the University of Cincinnati College of Medicine in Ohio who was not involved in the research, said the birth defect rate might differ in other populations depending on whether it’s a first or second Zika infection. "In populations where it’s just been introduced it might be more severe for the fetus, but time will tell," he said. The five hundred twenty seven surviving babies in the new study will be followed for at least two years. Only longer-term follow-up will help identify the full spectrum of Zika-related complications, the Hoen team said. The virus was linked to birth defects, particularly microcephaly, during the two thousand sixteen–two thousand seventeen outbreak. Zika vaccines are still being tested. Previous estimates of congenital neurologic defects in infected newborns ranged from six percent to forty two percent. The Hoen team included women who were pregnant from March through November two thousand sixteen and had confirmed Zika infection. Their study of five hundred fifty five fetuses and infants found evidence of microcephaly in five point eight percent and severe microcephaly in one point six percent. The earlier the infection in pregnancy, the greater the risk: neurologic or eye problems were seen in twelve point seven percent of babies whose mothers were infected during the first trimester. The rates were three point six percent when mothers were infected during the second trimester and five point three percent during the third. "This is some of the most compelling data to date that the risk of brain abnormalities, microcephaly, and eye anomalies extends to infections in every trimester of pregnancy," Honein wrote in her editorial. The French territories were French Guiana, Guadeloupe, and Martinique. A Zika registry in the United States has found birth defects in ten percent of women known to have been infected with the virus; when the infection occurred during the first trimester, the rate was fifteen percent. But those studies "do not provide information on the estimated eighty percent of pregnant women with Zika infections who have no reported symptoms," Honein wrote. 
"Population-level increases in Zika-associated birth defects are unlikely to be recognized without ongoing, timely, and comprehensive surveillance of birth defects that captures all affected fetuses and infants regardless of whether maternal Zika exposure or infection was identified." Zika "should definitely be added to the list of infectious agents that can cause severe birth defects, as are rubella virus, cytomegalovirus, and others," Honein said in her email. The study also emphasizes the urgent need to protect women who may become pregnant while traveling or living in areas with Zika-infected mosquitoes, to test women at risk for Zika when they become pregnant, and to develop an effective vaccine, she said. The New England Journal of Medicine, online March fourteen, two thousand eighteen.
long_en_262
wiki_en
881
en
Time-domain astronomy is the study of how astronomical objects change with time. Although the study may be said to begin with Galileo's Letters on Sunspots, the term now refers especially to variable objects beyond the Solar System. Changes over time may be due to motion or intrinsic changes in the object itself. Common targets include supernovae, pulsating stars, novae, flare stars, blazars, and active galactic nuclei. Visible-light time-domain surveys include OGLE, HAT-South, Pan-STARRS, SkyMapper, ASAS, WASP, CRTS, and, in the near future, the LSST at the Vera C. Rubin Observatory. Time-domain astronomy studies transient astronomical events (often shortened to "transients") as well as various types of variable stars, including periodic, quasi-periodic, and those that change behavior or type. Other causes of time variability are asteroids, high proper-motion stars, planetary transits, and comets. Transients are astronomical events whose observable durations range from milliseconds to days, weeks, or even several years, in contrast to the millions or billions of years over which galaxies and their component stars evolve. More specifically, the term often refers to violent deep-sky events such as supernovae, novae, dwarf-nova outbursts, gamma-ray bursts, tidal disruption events, and gravitational microlensing. Time-domain astronomy involves long-term studies of variable stars and their changes on timescales from minutes to decades. Variability can be intrinsic—such as periodic or semi-regular pulsations, young stellar objects, stars with outbursts, and asteroseismology—or extrinsic, resulting from eclipses (in binary stars or planetary transits), stellar rotation (in pulsars and spotted stars), or gravitational microlensing. Modern time-domain surveys often use robotic telescopes, automated classification of transient events, and rapid notifications for interested researchers. Blink comparators have long been used to detect differences between photographic plates, and image subtraction became more widely used once digital imaging made it easier to normalize image pairs. Because of the large fields of view required, time-domain work involves storing and transferring vast amounts of data, requiring data mining, automated classification, and the handling of heterogeneous datasets. The importance of time-domain astronomy has been recognized by professional societies. Andrzej Udalski was recognized for his "pioneering contribution to the growth of a new field of astrophysics research, time-domain astronomy, which studies the variability of brightness and other parameters of objects in the universe on different timescales." The 2017 Dan David Prize was awarded to three leading researchers in the field of time-domain astronomy: Neil Gehrels (Swift Gamma-Ray Burst Mission), Shrinivas Kulkarni (Palomar Transient Factory), and Andrzej Udalski (Optical Gravitational Lensing Experiment). History: Before the invention of telescopes, transient events visible to the naked eye within or near the Milky Way were very rare and sometimes separated by hundreds of years. Such events were recorded in antiquity, for example the supernova of 1054 observed by Chinese, Japanese, and Arab astronomers, and the 1572 event known as "Tycho's Supernova," after Tycho Brahe, who studied it until it faded two years later. Although telescopes made it possible to see more distant events, their small fields of view—typically less than 1 square degree—meant the chances of looking in the right place at the right time were low. 
Schmidt cameras and other wide-field astrographs were invented in the 20th century, but were mostly used to survey the unchanging heavens. Historically, time-domain astronomy has included the appearance of comets and the variable brightness of Cepheid-type variable stars. Old astronomical plates exposed from the 1880s through the early 1990s at the Harvard College Observatory are being digitized by the DASCH project. Interest in transients increased with the availability of large CCD detectors to the astronomical community. In the 1990s, as telescopes with wider fields of view and larger detectors came into use, the first large, regular survey observations were initiated—pioneered by gravitational microlensing surveys such as the Optical Gravitational Lensing Experiment (OGLE) and the MACHO Project. Besides discovering microlensing events, these efforts increased by orders of magnitude the number of known variable stars. Subsequent dedicated sky surveys, including the Palomar Transient Factory, the Gaia spacecraft, and the Large Synoptic Survey Telescope (LSST), focused on extending sky monitoring to fainter objects, adding optical filters, and improving astrometry and proper-motion measurements. In 2022, the Gravitational-wave Optical Transient Observer (GOTO) began searching for optical counterparts to neutron star collisions. Modern instruments can observe wavelengths invisible to the human eye—radio, infrared, ultraviolet, and X-ray—greatly increasing the information available when studying transients. In radio astronomy, the Low-Frequency Array (LOFAR) is used to search for radio transients; radio time-domain studies have long included pulsars and scintillation. Projects that look for transients in X-rays and gamma rays include the Cherenkov Telescope Array, eROSITA, AGILE, Fermi, HAWC, INTEGRAL, MAXI, Swift (the Gamma-Ray Burst Mission), and the Space Variable Objects Monitor. Gamma-ray bursts are well-known, high-energy electromagnetic transients. The proposed ULTRASAT satellite will observe a field of more than 200 square degrees continuously in the ultraviolet, a band particularly important for detecting supernovae within minutes of their occurrence. See also: List of gamma-ray bursts; gravitational microlensing; list of gravitational wave observations; list of exoplanets detected by microlensing; X-ray transient; cataclysmic variable star; stellar pulsation; SIMBAD Astronomical Database; observational astronomy; astronomical events.
Last month, on her way out of her high-ranking Justice Department post, Rachel Brand disclosed a new DOJ priority in a luncheon speech to the Federalist Society. The Class Action Fairness Act of 2005 (CAFA) requires defendants to notify the Justice Department of proposed settlements. Because of a mailroom snafu, Brand said, DOJ hadn't been receiving notices in time to review the settlements before they were approved. But the systems had been fixed, Brand said, and the Justice Department was ready to stand up. "If a settlement isn't fair or reasonable under CAFA, DOJ may file a statement of interest saying so," Brand said. "Be on the lookout in the coming days for the first example."

Sure enough, on the very next day, the Justice Department filed a statement of interest opposing final approval of a $10.8 million consumer class action settlement in federal court in Camden, New Jersey. DOJ's filing marked the first time in more than a decade, and apparently only the third time since CAFA's enactment, that the Justice Department has followed up on a CAFA notice by urging a court to reject a settlement.

Class counsel in the case, Cannon v. Ashburn, responded Monday to the Justice Department, to a coalition of state attorneys general, and to a handful of objecting class members. The brief, from Carella Byrne Cecchi Olstein Brody & Agnello and Giskan Solotaroff Anderson, describes changes to the proposed settlement. The Justice Department and other objectors had complained that the deal entitled class members, consumers who bought wine from the website Wines Til Sold Out, only to coupon-like "redemption codes" that discounted the price of future purchases from the site. The new proposal adds a $500,000 cash fund for class members who do not redeem their discounts. It also extends the period in which purchasers can use the discounts and, at the judge's suggestion, postpones consideration of the plaintiffs' lawyers' $1.7 million fee request until class members have received their recoveries.

Most notably, the brief accuses the Justice Department of unjustified ideological meddling in a settlement that has proven popular with the allegedly deceived consumers who benefit from it. Class counsel said the DOJ has no business interfering in this case, or, for that matter, in any class action in which the United States does not have an interest. If, as Rachel Brand hinted, the Justice Department plans to step up policing of class-action settlements, the Wines Til Sold Out case could be an important test of the DOJ's authority.

The class brief contends that CAFA gives the federal government a right to be notified about proposed class deals, but not a right to influence the settlement approval process. The brief said DOJ cherry-picks language from CAFA's legislative history, which states that "the committee believes that notifying appropriate state and federal officials of proposed class action settlements will provide a check against inequitable settlements in these cases" and "will also deter collusion between class counsel and defendants to craft settlements that do not benefit the injured parties." But that language, the brief argues, does not convey a right to object to settlements: the statute's policy is to check abuses through the obligation to report, not through an unstated right to object. DOJ essentially asks that such a right be inferred, which, class counsel says, is wrong.
The brief cited several cases, including the BP Deepwater Horizon litigation, in which federal courts have held that CAFA does not confer constitutional standing on state attorneys general who want to object to class settlements. The brief does not mention precedent on CAFA and DOJ's standing, but that omission is to be expected given the novelty of DOJ's filing in the wine case.

Class counsel said the Justice Department cannot claim an interest in a case asserting a violation of New Jersey consumer laws, particularly because the federal government has failed to protect consumers from deceptively advertised discounts. The Justice Department argued that consumers weren't injured by Wines Til Sold Out's allegedly inflated reports of original prices because consumers ultimately got exactly what they purchased at the price they agreed to pay. Plaintiffs' lawyers said DOJ ignored the "real, quantifiable economic value," under New Jersey law, of ending a so-called reference pricing scheme. "There are ... no 'interests of the United States' at stake in the application of state consumer fraud and common law, and DOJ identifies none," the brief said.

Class counsel accused the Justice Department of making an ideological statement instead of acting in the best interests of consumers, who have already claimed $3 million in benefits. The claims rate, according to the brief, is so far 15 percent, much higher than average rates in consumer cases. "DOJ's statement appears to be based on little more than an ideological hostility to collective litigation," the brief said. "This is not the forum for airing such grievances." According to the brief, DOJ's statement does not reasonably explain the government's views of the law and the facts; instead, it presents a self-serving, imagined set of facts to further an ideological crusade against class actions.

In an interview, class counsel and former federal prosecutor James Cecchi of Carella Byrne said the Justice Department should have contacted him before using this case to launch its new class-action activism. "They filed an objection without gaining an understanding of the facts or the case, as clearly reflected in the huge take rate to date," he said. "The most important voice, other than the court, is the class, and the class here has said clearly and unambiguously that they want the relief provided by this settlement."

One potentially significant point: DOJ styled its filing in the wine class action not as an objection but as a statement of interest. Class counsel's brief argues that under the Federal Rules of Civil Procedure, only class members can file objections to proposed settlements. "The rule and the case law thus doom DOJ's ill-considered venture here," the brief asserts. But technically, the Justice Department isn't appearing as an objector.
Muscular evolution in humans refers to the adaptations of the muscular system from early ancestors to modern humans. Humans are predisposed to develop muscle mass because early humans depended on strength for hunting and survival. Although modern humans rely less on muscle for everyday survival, muscle development can be as rapid or faster today, thanks to improved training techniques and greater knowledge of anatomy.

DNA and anthropological data indicate that modern humans (Homo sapiens) are primates and descendants of ape-like species. All species of the genus Homo are extinct except modern humans, who are thought to have evolved from australopithecine ancestors in East Africa. The development of modern humans has taken place over roughly 300,000 years, and unique adaptations resulted from the ecological pressures Homo sapiens faced.

Largely due to ecological and behavioral factors, the human muscular system differs significantly from that of our early primate ancestors. As with other evolutionary changes, the muscular system evolved to increase survivability. Because muscles, ligaments, and tendons are distributed throughout the body and support many functions, our behavior and choices are shaped in part by our physical capabilities.

It is believed that our ancestors' original habitat was not on the ground but in the trees. Early humans developed new habits that eventually allowed them to thrive on the ground, such as changes in diet, food gathering, energy expenditure, social interactions, and responses to predators. Life in the canopy offered a food supply similar to that of herbivores: leaves, fruits, and berries, mostly low-protein foods that did not require much energy to find. However, when available, meat was also consumed. At that time our ancestors had not yet switched to full-time bipedalism, so searching for food on the ground was impractical because it involved too much energy and risk. The canopy also lacked the predators found on the ground, against which our chimp-like ancestors would have been poor defenders. As they became bipedal, they began to live in groups that used weapons to fend off predators and hunt prey; running became a key aspect of survival. Even so, it is the development of the brain that guided the evolution of human muscle function and structure. Skull, neck, and head structures reflect these changes.

It is suspected that H. sapiens' ancestors did not initially forage on the forest floor; instead they migrated from the trees for various reasons. In that environment they survived on a diet high in plant matter, with some insects and small amounts of meat. Early human ancestors were not formidable compared with dominant mammals such as large cats, but improvements in hunting, gathering, and brain development allowed them to add high-calorie foods like meat to their diet. Analyses of jaws and skulls indicate they had larger, stronger jaw muscles and larger posterior molars, consistent with a diet that included substantial amounts of fruit and plants. Reliance on higher-calorie foods grew as bipedalism proved less efficient and climbing tall trees became more energetically costly.

Early ancestors also had more muscles connecting the skull, neck, and shoulders, similar to apes, which gave their neck and skull a more downward orientation like that of non-human primates. Reduction of these muscles allowed the head to be held in a more upright position and enabled the occipitofrontalis (forehead) muscle to play a greater role in facial expression.
As humans became taller after adopting bipedalism, the back muscles around the base of the tailbone and hips lengthened and body mass increased, further reducing the ability to climb. Early human ancestors had a tail where the modern coccyx (tailbone) is located; it aided balance in the trees but lost importance as bipedalism was adopted. Arms became shorter relative to the legs, better suited for carrying objects and manipulating tools than for climbing and swinging.

The Homo sapiens lineage developed an opposable thumb, enabling many new manual functions. Forearm muscles and their tendons transmit force to the hands and fingers, allowing fine manipulation as well as greater strength. Overall, upper-body muscles adapted to activities that concentrate force in the hands and arms, such as holding, throwing, lifting, carrying while running to escape danger, hunting, and constructing shelters. The shift to full-time bipedalism in our ancestors is the primary reason for adaptations in muscle structure and function, especially in the lower body.
Weight Watchers International, Inc.'s stock got another boost from Oprah Winfrey's weight loss; she is the company's backer and most famous customer. The shares climbed as much as 19 percent after Winfrey announced that she'd lost 40 pounds using the program. The diet company, whose shares had fallen 54 percent this year before Thursday's surge, is unveiling new TV ads featuring Winfrey that tout her weight loss.

The media magnate and talk-show veteran became a centerpiece of Weight Watchers' comeback plan last year when she bought a stake in the company and joined the board. In October 2015, when Weight Watchers first announced its deal with Winfrey, the shares more than doubled in a single day. Although that rally faded, Winfrey's subsequent endorsements have given the stock temporary jolts. The shares gained 27 percent over two days last December when Oprah tweeted a video about using the weight-loss program, and there was a 20 percent bump in January after Winfrey said she had lost 26 pounds while still eating bread every day.

The latest marketing push begins next week, just before the New Year's holiday. The coming weeks are critical for Weight Watchers: the company typically adds about 40 percent of new customers in the first quarter, when resolutions push people to seek out diets. The stock climbed as high as $12.50 on Thursday, its biggest intraday gain since February.

Heavy Competition: Weight Watchers has been hit hard by the rise of free fitness apps and a move away from strict calorie counting among dieters. But Winfrey's wide-ranging influence has brought new life to the brand. Weight Watchers has added subscribers for three straight quarters after years of declines. The New York-based company hopes that Winfrey's new ads, which emphasize that she still enjoys pasta and tacos, will convince dieters that its updated program is an effective weight-loss tool. "It's vitally important that Oprah is living the program and articulating it in a way that's authentic," said Maurice Herrera, the head of marketing at Weight Watchers. "It really connects with prospective members."

Despite the subscriber gains, it's been a rocky year for Weight Watchers. The company has billions of dollars in debt, and its stock is heavily shorted. In September, Chief Executive Officer Jim Chambers announced he was leaving after about three years at the helm. That sent the shares plummeting to their lowest level in nearly a year. Chambers had been working to turn around the company, an effort that included revamping its nutrition program and adding a social-networking component to its app. Now Weight Watchers is being run by a trio of executives who constitute the "interim office of the CEO."

The ads rolling out next week show Winfrey doing yoga and cooking pasta. As consumers have changed their perception of wellness, Weight Watchers has pivoted to emphasize that weight loss and indulgence aren't necessarily mutually exclusive. Winfrey will hammer that message home, Herrera said: "People want to know what's on her mind and how she's living her life. This will definitely get people's attention."

Oprah is down more than 40 pounds. The media guru is celebrating her weight-loss success in the first of two new ads for Weight Watchers, shared with PEOPLE. "Since I started Weight Watchers, I've lost over 40 lbs," Winfrey, 62, says in the ad.
"I can honestly tell you, I struggle no more." Not only can she still eat her favorite food, bread, but the program allows her to indulge in other treats that would typically be off-limits on a diet. "I'm eating everything I love: tacos, pasta. I've never felt deprived," she says.

Winfrey, who is a Weight Watchers shareholder, said the program is less of a diet and more of a life change. "Weight Watchers is easier than any other program I've ever been on," she said in a press release shared with PEOPLE. "It's a lifestyle, a way of eating and a way of living that's so freeing. You never feel like you are on a diet and it works." "I would say to anyone who's thinking of joining Weight Watchers: Take the leap and get about the business of enjoying a fantastic and full life," Winfrey says in the commercial. She'll share some of her favorite Weight Watchers-friendly recipes in her new cookbook, Food, Health and Happiness, out on Jan. 3, along with deeply personal stories about her weight-loss journey.

Since joining Weight Watchers in August 2015, Oprah has been steadily dropping pounds. By January 2016, she had lost 26 pounds through healthy eating. She also stepped up her exercise routine, vowing in May to log at least 10,000 steps a day. "I try to do something every day that allows me to feel active, and I don't make myself crazy about it," she told PEOPLE. "I just know that movement and flexibility, particularly the older you get, is what makes you feel alive. So I don't want to just be alive; I want to feel it."
A Papua New Guinean plane sank in a lagoon after overshooting the runway in the Federated States of Micronesia. All passengers were reportedly rescued safely from Air Niugini's partially submerged Boeing 737-800 after local fishers took their boats out to the crash site almost immediately. Videos showed dozens of people in boats around the wreckage. Locals reported broken bones among the injuries after the flight came in very low for landing and ended up in the water.

Flight 73 operates between Pohnpei in the Federated States of Micronesia and Port Moresby, stopping in Chuuk State. Various reports said up to 57 people were on board, including 11 crew and between 36 and 46 passengers; evacuees were taken to hospital. The aircraft is believed to be a 13-year-old Boeing 737-800 previously operated by Jet Airways and Air India Express. It had been involved in a collision at Port Moresby in May, when a cargo plane clipped its wing while turning, according to Papua New Guinea's Accident Investigation Commission. The Air Niugini plane that landed in the ocean appears to be Flight PX073, which was scheduled to fly Pohnpei–Chuuk–Port Moresby. Photos show small boats conducting rescues.

Matthew Colson, a Baptist missionary on the island, said it had been raining but was not windy when the plane landed. Colson, who has lived on the island most of his life and runs a radio station, spoke with residents and officials in the aftermath. Passengers included a small number of locals alongside predominantly US and Australian passengers, he said. The plane crashed into the water near a market where fishers had come to sell their catch. "They just went straight out there and started hauling people to shore," Colson said.

Air Niugini had only recently begun flying that route with larger Boeing planes, Colson said. "United is mostly the only airline that comes out here, and it's been that way for years … There are flights every day but this has never happened before. Mainly because this route is considered one of United's hardest routes for the 737, so they … send their best pilots out here for the island hopper."

Colson also interviewed one of the passengers, Bill Jaynes, a journalist based in Pohnpei. "It was surreal," Jaynes said in a video posted to Facebook. "I thought we had just landed hard until I looked over and saw a hole in the side of the plane and water coming in, and thought, 'This is not the way it's supposed to happen.' "We came in low, very low. Unfortunately the flight attendants panicked and started yelling. I was trying to be calm and help as much as I could." Jaynes said he wasn't seriously hurt, but there were "pretty severe injuries" among other passengers. "I was really impressed with the locals who immediately came out in boats. One might think they'd be afraid to approach a plane that had just crashed, but they were awesome." Colson told Guardian Australia that there were some broken limbs and head injuries among passengers, who were not braced for impact.

John Merelli, an employee at the High Tide hotel near the end of the runway, said he heard the plane coming in but thought it was a normal landing. He went back to work until someone alerted him; from the rooftop he then saw the plane starting to go underwater. "It was sinking," he said. "It's underwater now." He added that ordinary people in boats were the first rescuers, arriving in around five minutes, while officials took about 10 minutes. Another employee said the runway was known to be very short.
Ethan Klapper reported that an Air Niugini 737 overran the runway at Weno Airport (TKK) in Chuuk, Micronesia. The runway is 6,013 feet long, relatively short for an airport with airline service. A flotilla of small boats rescued all 47 passengers and crew after the aircraft landed in water on Friday morning. Flight 73 was stopping over at Weno on its way from Pohnpei to Port Moresby when the incident occurred. A reporter who flew into the airport in July noted how hard pilots must brake on landing there. "It was supposed to land but instead of landing it was 150 yards (135 metres) short and she went down," Jimmy Emilio, general manager of Chuuk Airport, told Reuters.
For the first time in seven years, the U.S. birth rate increased by 1 percent in 2014 compared to 2013, the Centers for Disease Control and Prevention reported. The increase was not universal: the teen birth rate dropped by 9 percent, a record low, while births among women in their 30s rose 3 percent and among women in their 40s rose 2 percent. The general fertility rate grew to 63 births per 1,000 women in 2014, up from 62.5 per 1,000 in 2013. The change was apparent across races except among American Indian and Alaska Native women, who experienced a decrease. The proportion of births among older married women increased, while the proportion among unmarried younger women declined.

There are likely many reasons for this shift, but an increase in contraceptive use likely played a role: more women are using condoms and the pill than 20 years ago, and IUDs are gaining popularity because of their ease of use. According to the CDC, the most effective contraceptives for teens are IUDs and implants, and their increased use appears to be paying off. Teen births have fallen by more than 60 percent over the past 25 years. The most recent peak was in 1991, when there were about 62 births per 1,000 women aged 15 to 19; today the rate is 24 per 1,000. The change is gradual but important: children born to teenage parents are more likely to be born prematurely and to suffer poor health early in life.

That wasn't the only large drop in 2014, as the U.S. also saw a 2 percent decline in births among women in their 20s. Like the teen birth rate, this is part of an ongoing trend; the birth rate among women in their early 20s has declined steadily since 2007. Millennial women may be delaying childbirth longer than previous generations, since prime childbearing years often coincide with crucial career-advancement years. A recent study shows that the marital pay gap that emerges after a first child is born typically does not close if the birth occurs between ages 25 and 35.

Shannon Hettinger, 32, of Washington, D.C., said she definitely wants children. She grew up in a large family in a small Pennsylvania town, and almost all her high school friends are married with children, but she moved to Washington and spent her 20s deciding on a career. Now that she has one she loves, working in residential real estate sales, she is not going to stop until she gets established. That means not having children for a while. "I just want to build my book of business and see where I can go from here," she said. "My whole focus is career growth. That's my No. 1 priority." "Once I achieve a certain level of success," she added, "then I'll start thinking about a family."

Ivy Gray-Klein, 26, who lives in Philadelphia and works at the University of Pennsylvania School of Design, said she was open to having children but cannot imagine doing so until she is 30 or 35. She wants to feel settled in her own life first. Now she has three roommates, is paying down her student loans, and is working to build a little bit of savings. "I'm just really trying to get myself to a place that is solid," she said by phone. "Having a child right now would be so destabilizing. Children just seem like such an enormous financial undertaking."

The country has had declines before. The longest period of continuous decline on record was from 1958 to 1968, according to Brady E. Hamilton, a statistician and demographer with the National Center for Health Statistics. The United States has been tracking the number of births and birthrates since about 1910.
The most recent decline has been deepest for minorities. The fertility rate among Hispanic women dropped more than 27 percent between 2007 and 2016, the most recent year of data by race. The rate among White women has dropped by about 4 percent, among Black women by about 11 percent, and among Asian women by about 5 percent.
Nehru College of Engineering and Research Centre (NCERC) is a private engineering college in Thiruvilwamala, Thrissur District, Kerala, India. The college is approved by the All India Council for Technical Education (AICTE), accredited by the National Assessment and Accreditation Council (NAAC), and affiliated to the University of Calicut. NCERC was also awarded ISO 9001:2008 certification. The college is run by the Nehru College of Educational and Charitable Trust, established in 1968 and headquartered in Kuniamuthur, Coimbatore District, Tamil Nadu. The trust operates 12 educational institutions in Kerala and Tamil Nadu under the Nehru Group of Institutions.

In January 2017 the college was in the news after a student died by suicide; media reports said he had been allegedly harassed by college authorities over purported examination malpractices, and the case prompted other students in Kerala to come forward with accounts of mistreatment at self-financed colleges. The college has been reported to enforce unconventional rules, with penalties for growing beards, not wearing an ID tag, being late, cutting a cake for celebrations, and similar conduct; students who questioned these policies have reportedly faced reduced internal marks or denial of exam attendance. There is an undeclared ban on boys and girls being together on campus.

Student suicide controversy: An 18-year-old computer science student named Jishnu Pranoy was found hanging in his hostel bathroom on January 6, 2017, allegedly after being harassed by the college management when he was caught cheating in a university semester exam. There were torture marks on Jishnu's body, and college authorities reportedly refused to take him to the hospital. His death sparked violent protests, with many students coming forward to share the harassment they faced at the college. It was reportedly revealed that the college had "torture rooms" where students were assaulted. Students were also fined for beards longer than 0.02 millimeters and were not allowed to have girlfriends or to sit on the same bench with girls in the classroom. Even cutting cakes in the classroom to celebrate birthdays was banned. The Kerala State Human Rights Commission has sought a report on the matter, and many mainstream media outlets covered the case.

A Facebook campaign page called "Justice for Jishnu" was started, where students shared posts alleging that emotional harassment by college authorities was an everyday affair. The mystery surrounding Jishnu's death deepened when the college's claim that he had been "caught copying" was disproved by investigations. The university investigation team assigned by Kerala Technological University (KTU) said they had not received any complaints regarding Jishnu's alleged copying. According to university rules, the college was required to report any exam malpractice incidents on the same day. After inspecting the bench where Jishnu had sat, and assessing the distance and angle to other students, A.D.G.P. Sudesh Kumar concluded there was no chance Jishnu could have copied in the manner the college claimed. All students testified that Jishnu did not copy.

Jishnu's postmortem report showed multiple injuries to his nose, face, lips, and neck, suggesting physical assault. Classmates alleged that the vice-principal and the college PRO had physically assaulted Jishnu, after which he committed suicide. After this incident, students from many other private engineering colleges came forward with stories of harassment by management.
When the protest turned violent, the college management suspended Vice-Principal Dr. N. K. Sakthivel, teacher Praveen C. P., and PRO Sanjith Viswanathan. The Kerala Chief Minister said many organisations had started colleges with an eye on making profits, and that even liquor barons established colleges and auctioned job opportunities in those institutions. He said the government would inquire into the "corruption and loot" prevalent in the self-financing educational sector. "Ever since self-financed colleges came up, many saw the education sector as a business with a potential for huge profits."
To some, board games are a simple reprieve from stress and constant connectedness. For Travis and Holly Hancock, they're the foundation of their startup, Facade Games, which has grossed over $1 million since they began it as a side hustle. The Columbus, Ohio–based husband-and-wife duo first toyed with creating a board game in 2010. They launched their first funded game, Salem 1692, in March 2015 after about two years of development, starting with roughly $500 in seed money and no prior industry experience. The Kickstarter campaign hit its $6,000 goal in less than a day and ultimately raised $103,000—1,722% of the goal. They have since run three successful Kickstarter campaigns; their latest has raised over $400,000 with three days to go. Their success, they say, comes from the attention to detail and the energy they invested in the games. Travis recalls his mother saying, "As soon as a game box is ruined, the game is ruined; you never pull it out anymore because it looks like trash." The Hancocks came up with the idea of using books as cases for their games; they say each game tells a different story. The couple found a manufacturer on Alibaba that made a prototype hollowed-out book to house the game for $100. Travis, who was living with Holly in Utah at the time, also attended SaltCON, the state's largest board game convention, and asked experts for advice on getting started. He later found a small-batch card maker on MakePlayingCards.com to create the prototype's other pieces, and they drew on Holly's background in graphic design. On a Brigham Young University job site, the couple found an illustrator to help bring their characters to life. For two years, after clocking out of work — Travis from his digital marketing job and Holly from teaching — the Hancocks tinkered with their prototype. When they were finally finished, they shot a promotional video for Kickstarter on an iPhone and hired a voice-over actor on Fiverr to speak in Old English to evoke the era in which Salem 1692 was set. Travis says it does wonders for bringing people into the game. They have since used pirate-themed and Western voice-overs to promote later games too. After the campaign, in November 2015, the Hancocks distributed the finished versions of Salem 1692 to backers, and Travis quit his full-time job to split his time between developing a second game and doing contract marketing on the side. Holly, then 24, kept teaching so they would have health care benefits. The Facade Games co-founders joke that they hope their daughter, Margo, won't get sick of playing their board games. In January 2017, after following a similar process as with their first game, the Hancocks launched a Kickstarter for their pirate-themed mutiny game, Tortuga 1667 — and Travis held his breath. "I still wasn't sure if Salem was just a crazy anomaly that randomly did well on Kickstarter, and didn't know if our second game would just flop and we'd have to go back and get real jobs again," he recalls. It didn't flop: Tortuga 1667 raised over $400,000, shattering its fundraising goal by 4,072 percent. Friends tested an iteration of an earlier Hancock idea before it launched on Kickstarter.
They converted the business from a limited liability company to an S-corp to start taking salaries. Owning their business is paying dividends and giving them the flexibility to enjoy parenting now that they have a one-year-old daughter, Margo. "I've always been stubborn in that I don't want anyone to control me, so I think that's why I wanted to be my own boss," Travis says. "I kind of knew I was going to have a family one day and thought it'd be really fun to be home with them more."
We compare the performance of Qwen2.5-Instruct models against several leading language models, including GPT-4, Claude3.5-sonnet, Qwen2, and Llama-3.1, in both English and Chinese. Our analysis focuses on model size and its impact on performance, as well as how our latest Qwen2.5 series compares to previous iterations and competing models. For smaller models, we observe that the Qwen2.5-0.5B model achieves performance that is on par with or even surpasses the Qwen2-1.5B model. This indicates that the Qwen2.5 series has optimized parameter usage, enabling mid-sized models to achieve similar performance levels to larger models from the previous generation. The Qwen2.5-3B model demonstrates performance that is comparable to the Qwen2-7B model. Notably, the Qwen2.5-32B model exhibits a remarkable improvement over the Qwen2-72B model. Our flagship model, Qwen2.5-72B, further narrows the gap between Qwen and state-of-the-art models like GPT-4 and Claude3.5-sonnet. In particular, Qwen2.5-72B matches or exceeds the performance of Llama-3.1-405B in all metrics except for instruction following. This achievement underscores the competitiveness of Qwen2.5-72B in a wide range of language processing tasks, while also identifying areas for future improvement. Qwen2.5-Plus addresses the previous shortcomings in Chinese instruction following and further enhances its advantages in other areas. Multilingual Evaluation To comprehensively evaluate the multilingual capabilities of instruction-tuned models, we followed P-MMEval and extended several benchmarks as follows: (1) IFEval (Multilingual): We expanded the IFEval benchmark, originally in English, to include multilingual examples. To ensure language neutrality, we removed instances that contained language-specific content (e.g., "start with letter A"). (2) Knowledge Utilization: To assess the knowledge utilization abilities of the Qwen2.5 series models across multiple languages, we employed five MMLU-like benchmarks (multiple-choice format). These benchmarks include AMMLU (Arabic), JMMLU (Japanese), KMMLU (Korean), IndoMMLU (Indonesian), and TurkishMMLU (Turkish). Additionally, we evaluated the models' performance on the translated version of the MMLU benchmark (okapi_MMLU), which has been adapted into multiple languages from its original English form. (3) MGSM8K (Extended): Building upon the original MGSM8K benchmark, we extended the language support to include Arabic (ar), Korean (ko), Portuguese (pt), and Vietnamese (vi). (4) Cultural Nuances: To evaluate the models' ability to capture cultural nuances, we utilized the BLEnD benchmark. This benchmark is specifically designed to test LLMs on their understanding of cultural subtleties. Qwen2.5 exhibits competitive performance in instruction following, multilingual knowledge, and mathematical reasoning, aligning well with models of comparable size. Although it shows notable improvements in capturing cultural nuances relative to its predecessor, Qwen2, there remains potential for further refinement in this domain. Subsubsection 5.2.3 Reward Model The reward model serves as the cornerstone for guiding RL processes, and thus we conduct a separate evaluation of the reward model used in the Qwen2.5 series. Our assessment benchmarks encompass Reward Bench, RMB, PPE, and an internally collected out-of-domain Chinese human preference benchmark (Human-Preference-Chinese) to provide a comprehensive analysis.
For comparison, we included baseline models such as Nemotron-4-340B-Reward, Llama-3.1-Nemotron-70B-Reward, and Athene-RM-70B. Overall, our findings indicate that Llama-3.1-Nemotron-70B-Reward excels on the Reward Bench, while Athene-RM-70B performs best on the RMB benchmark. The Qwen2.5-RM-72B leads in both the PPE and Human-Preference-Chinese evaluations, ranking second only to Athene-RM-70B on the RMB and achieving a performance level comparable to Nemotron-4-340B-Reward on the Reward Bench, albeit slightly behind Llama-3.1-Nemotron-70B-Reward. Because few dedicated evaluation methods exist for reward models, current reward models are typically evaluated using Reward Bench alone. However, our evaluation results from multiple RM benchmarks suggest that over-optimization on a specific benchmark may trigger Goodhart's law, resulting in degraded performance on other benchmarks and potentially impacting downstream alignment performance. This highlights the need for comprehensive evaluation of reward models across diverse benchmarks rather than relying solely on a single benchmark. More importantly, through iterative experimentation, we have also come to recognize a critical limitation: current reward model evaluation benchmarks do not accurately predict the performance of the RL models trained under their guidance. In other words, a higher score on RM benchmarks does not necessarily correlate with superior performance of the resulting RL model. This insight underscores the need for further research into more predictive evaluation methods for reward models. Subsubsection 5.2.4 Long Context Capabilities We utilize three benchmarks to evaluate the long context capabilities of Qwen2.5 models: RULER, LV-Eval, and Longbench-Chat. In LV-Eval, we adopt keyword recall as the reported score to mitigate the high rate of false negatives present in the original metrics. We can observe that the Qwen2.5 models, after being equipped with length extrapolation techniques (i.e., DCA + YARN), demonstrate strong long context processing capabilities on the three datasets. Among them, Qwen2.5-72B-Instruct shows the strongest performance across all context lengths, significantly outperforming existing open-weight long-context models as well as proprietary models like GPT-4o-mini and GPT-4. Furthermore, Qwen2.5-Turbo achieves 100% accuracy in the 1M-token passkey retrieval task, demonstrating its exceptional ability to capture detailed information from ultra-long contexts. We develop a sparse attention mechanism based on Minference to significantly enhance inference speed, which is critical for user experience when processing long contexts. For sequences of 1M tokens, this approach reduces the computational load of the attention mechanism by 12.5 times. Our method achieves a 3.2 to 4.3 times speedup in time to first token.
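The 12.5-times figure can be sanity-checked with a rough FLOP count: dense self-attention over a sequence of n tokens computes on the order of n squared query-key scores, so a sparse pattern that evaluates roughly 1/12.5 (about 8%) of those pairs yields the stated reduction. Below is a minimal sketch under that assumption; the function, the head dimension, and the keep ratio are illustrative, since the text does not specify Minference's exact sparsity pattern.

```python
def attention_flops(n_tokens: int, d_head: int, keep_ratio: float = 1.0) -> float:
    """Rough FLOPs for the QK^T scores plus the AV product of one attention head.

    keep_ratio is the fraction of query-key pairs actually computed:
    1.0 models dense attention; smaller values model a sparse pattern.
    """
    dense = 2 * 2 * (n_tokens ** 2) * d_head  # ~2 FLOPs per MAC, two n^2 * d products
    return dense * keep_ratio

n, d = 1_000_000, 128  # 1M-token sequence; d_head = 128 is an assumed value
print(attention_flops(n, d) / attention_flops(n, d, keep_ratio=1 / 12.5))  # 12.5
```

This only bounds the attention-score arithmetic, which is consistent with the smaller 3.2 to 4.3 times end-to-end speedup: projections, MLP layers, and memory traffic are unaffected by the sparsity.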
Empirical evaluations show that Qwen2.5-72B-Instruct matches the performance of the state-of-the-art Llama-3.1-405B-Instruct, despite being six times smaller. Qwen2.5 also serves as a foundation for specialized models, demonstrating its versatility for domain-specific applications. We believe that Qwen2.5's robust performance, flexible architecture, and broad availability make it a valuable resource for both academic research and industrial applications, positioning it as a key driver of future innovation. In the future, we will focus on advancing robust foundational models. First, we will iteratively refine both base and instruction-tuned large language models (LLMs) by incorporating broader, more diverse, higher-quality data. Second, we will continue to develop multimodal models. Our goal is to integrate various modalities into a unified framework. This will facilitate seamless, end-to-end information processing across textual, visual, and auditory domains. Third, we are committed to enhancing the reasoning capabilities of our models. This will be achieved through strategic scaling of inference compute resources. These efforts aim to push the boundaries of current technological limitations and contribute to the broader field of artificial intelligence. Section Appendix Subsection More Evaluation Results Here we present more detailed evaluation results of our Qwen2 models and the baselines. Specifically, we demonstrate the coding performance in different code languages, and the language understanding performance in multilingual evaluation. Subsubsection Coding Instead of simply evaluating the models on the conventional coding benchmarks, namely HumanEval and MBPP, we extend the evaluation for coding to comprehensively test the models' capabilities in coding problem-solving. We follow the practice of CodeQwen and implement the evaluation of EvalPlus and MultiPL-E. EvalPlus includes HumanEval and MBPP, as well as their new versions with extended test cases. MultiPL-E includes test sets in different code languages, including Python, C++, Java, PHP, TypeScript, C#, Bash, JavaScript, and Go, and it aims at testing the capability of LLMs in understanding and generating code in different languages.
The Revolutionary Party of Central American Workers (PRTC) was a political party in Central America. Ideology: The group that founded the PRTC was inspired by Marxism-Leninism, Che Guevara, and the Vietnamese national liberation struggle. The party was accused of Trotskyism by other revolutionary groups, an accusation it rejected. History: The PRTC was founded in 1975 by a sector that had left the ERP-RN after the 1972 elections. Clandestine pre-congress meetings for the party's founding were held in Costa Rica, Honduras, and El Salvador in 1975, and provisional zonal leaderships were formed in these countries. The party also formed cells in Mexico and the United States and established contacts with activists in Belize, Guatemala, Nicaragua, and Panama. In April 1975 the Liberation League was founded as a multisectoral mass front and became a front organization for the party. In December 1975 the plenary session of the party congress opened with delegates from around Central America. The PRTC was formally founded in San José on January 25, 1976, with Fabio Castillo Figueroa as general secretary. The party adopted democratic centralism as its organizational principle. During 1976–1978, the party built organizational structures throughout Central America except in Nicaragua; its leadership was based in Costa Rica. In Guatemala, several party leaders later joined ORPA. In El Salvador the Zonal Leadership included Mario López, also known as Comandante Venancio, who was Zonal Secretary; Manuel Federico Castillo; Luis Díaz; Humberto Mendoza; Nidia Díaz; and Joaquín Morales Chávez. The second party congress of PRTC was held in Tegucigalpa in April 1979 and elected Dr. José María Reyes Mata as General Secretary. The congress created a Central American Political Commission with one member from each Central American country. In 1979 the party set up several new mass organizations in El Salvador. The peasant sector of the Liberation League became a separate organization, Brigadas de Trabajadores del Campo (BTC). Other new mass organizations included Brigadas Revolucionarias de Estudiantes de Secundaria and Comités de Base Obrera. The People's Liberation Movement (MLP) was formed as an umbrella body for the party's mass organizations. In January 1980 MLP was among the organizations that founded the Coordinadora Revolucionaria de Masas, an alliance that later founded the Revolutionary Democratic Front in April 1980. Also in 1979 the party launched an armed wing, the Fuerzas Armadas de Liberación Popular. In August 1980 the main PRTC leader in El Salvador, Luis Díaz, disappeared. In October 1980 a PRTC Central Committee meeting was held in Managua. The meeting dissolved the Central American structures of the party, converting the national branches into separate parties. As a continuation of the unified PRTC, it established a Conference of Revolutionary Parties in Central America to act as a coordinating body for the individual PRTC branches. This shift enabled the Salvadoran branch to join the FMLN; the Salvadoran PRTC became a member of the FMLN guerrilla movement on December 5, 1980. The Honduran PRTC continued its own armed struggle.
Symphony Space, founded by Isaiah Sheffer and Allan Miller, is a multidisciplinary performing arts organization at 2537 Broadway on the Upper West Side of Manhattan. Performances take place in the 760-seat Peter Jay Sharp Theatre (also called Peter Norton Symphony Space) and the 160-seat Leonard Nimoy Thalia. Programs include music, dance, theater, film, and literary readings. In addition, Symphony Space provides literacy programs and the Curriculum Arts Project, which integrates performing arts into social studies curricula in New York City public schools. Symphony Space traces its beginnings to a free marathon concert, Wall to Wall Bach, held on January 9, 1978, organized by Isaiah Sheffer and Allan Miller. From 1978 to 2001 the theater hosted all New York productions by the New York Gilbert and Sullivan Players. As of 2010, Symphony Space hosts 600 or more events annually, including the annual free music Wall to Wall marathon; Bloomsday on Broadway (celebrating James Joyce's Ulysses); and Selected Shorts, which is broadcast nationally by Public Radio International. The New York company of Revels, Inc., also holds its shows there. Early history of the building: From 1915 to 1917 Vincent Astor spent $750,000 of his personal fortune on the Astor Market, a two-story mini-mall of stands occupying the southwest corner of 95th Street and Broadway. It was intended to sell fruit, meat, fish, produce, and flowers at low prices by achieving economies of scale. As was common with Astor's building projects, flamboyance dominated the architecture, including a 290-foot William Mackay sgraffito frieze depicting farmers bringing their goods to market. The market proved a failure. In 1917 Astor sold the market to Thomas J. Healy. The stalls were demolished, the main space was converted into the Crystal Palace, a skating rink, and the smaller basement area became the Sunken Gardens, a restaurant. Both were eventually turned into movie theaters. The rink became Symphony Theater, and in 1931 the restaurant was turned into the Thalia Theater. Symphony Theater had an undistinguished history and by the 1970s was used for boxing and wrestling. The site was used for Wall to Wall Bach, which led Sheffer and Miller to lease the building and form Symphony Space. The Thalia Theater was built by the experienced theater architect Raymond Irrera and his novice assistant Ben Schlanger. Schlanger introduced numerous innovations, including the "reverse parabolic" design for the floor. After World War II the Thalia gained a reputation as an arty repertory film theater. Its regular patrons included Woody Allen, Peter Bogdanovich, and Martin Scorsese. Woody Allen used it in his film Annie Hall. The Thalia closed in 1987, its future clouded by disputes between Symphony Space and various developers. After Symphony Space prevailed, the Thalia reopened briefly in 1993 and again in 1996. In 1999, Sheffer had the Art Deco interior gutted as unsalvageable, angering some neighborhood preservationists. The interior was used as a staging area for construction of a 22-story apartment building above Symphony Space. Afterwards it was rebuilt as a theater, and in 2002 the space reopened as the Leonard Nimoy Thalia in recognition of the actor's financial support. A sister movie theater, Thalia Soho, operated from 1987 until owner Richard Schwarz's death, then briefly operated as Le Cinematographe and later as the Soho Playhouse. In 2017, the theater and the apartments above were reportedly on the market.
LoRA: Low-Rank Adaptation of Large Language Models Compared to V1, this draft includes better baselines, experiments on GLUE, and more on adapter latency. Abstract An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on par with or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA. Introduction Many applications in natural language processing rely on adapting one large-scale, pre-trained language model to multiple downstream applications. Such adaptation is usually done via fine-tuning, which updates all the parameters of the pre-trained model. The major downside of fine-tuning is that the new model contains as many parameters as the original model. As larger models are trained every few months, this changes from a mere "inconvenience" for GPT-2 or RoBERTa large to a critical deployment challenge for GPT-3 with 175 billion trainable parameters. While GPT-3 175B achieves non-trivial performance with few-shot learning, fine-tuning boosts its performance significantly. Many sought to mitigate this by adapting only some parameters or learning external modules for new tasks. This way, we only need to store and load a small number of task-specific parameters in addition to the pre-trained model for each task, greatly boosting the operational efficiency when deployed. However, existing techniques often introduce inference latency by extending model depth or reduce the model's usable sequence length. More importantly, these methods often fail to match the fine-tuning baselines, posing a trade-off between efficiency and model quality. We take inspiration from Li et al. and Aghajanyan et al., which show that learned over-parametrized models in fact reside on a low intrinsic dimension. We hypothesize that the change in weights during model adaptation also has a low "intrinsic rank", leading to our proposed Low-Rank Adaptation (LoRA) approach. LoRA allows us to train some dense layers in a neural network indirectly by optimizing rank decomposition matrices of the dense layers' change during adaptation instead, while keeping the pre-trained weights frozen.
Using GPT-3 175B as an example, we show that a very low rank (i.e., $r$ can be one or two) suffices even when the full rank (i.e., $d$) is as high as 12,288, making LoRA both storage- and compute-efficient. LoRA possesses several key advantages. A pre-trained model can be shared and used to build many small LoRA modules for different tasks. We can freeze the shared model and efficiently switch tasks by replacing the matrices $A$ and $B$, reducing the storage requirement and task-switching overhead significantly. LoRA makes training more efficient and lowers the hardware barrier to entry by up to 3 times when using adaptive optimizers since we do not need to calculate the gradients or maintain the optimizer states for most parameters. Instead, we only optimize the injected, much smaller low-rank matrices. Our simple linear design allows us to merge the trainable matrices with the frozen weights when deployed, introducing no inference latency compared to a fully fine-tuned model, by construction. LoRA is orthogonal to many prior methods and can be combined with many of them, such as prefix-tuning. Terminologies and Conventions. We make frequent references to the Transformer architecture and use the conventional terminologies for its dimensions. We call the input and output dimension size of a Transformer layer $d_{\text{model}}$. We use $W_q$, $W_k$, $W_v$, and $W_o$ to refer to the query/key/value/output projection matrices in the self-attention module. $W$ or $W_0$ refers to a pre-trained weight matrix and $\Delta W$ its accumulated gradient update during adaptation. We use $r$ to denote the rank of a LoRA module. We follow the conventions set out by Vaswani et al. and Brown et al. and use Adam for model optimization and a Transformer MLP feedforward dimension $d_{\text{ffn}} = 4 \times d_{\text{model}}$. Problem Statement While our proposal is agnostic to the training objective, we focus on language modeling as our motivating use case. Below is a brief description of the language modeling problem and, in particular, the maximization of conditional probabilities given a task-specific prompt. Suppose we are given a pre-trained autoregressive language model $P_\Phi(y \mid x)$ parametrized by $\Phi$. For instance, $P_\Phi(y \mid x)$ can be a generic multi-task learner such as GPT based on the Transformer architecture. Consider adapting this pre-trained model to downstream conditional text generation tasks, such as summarization, machine reading comprehension (MRC), and natural language to SQL (NL2SQL). Each downstream task is represented by a training dataset of context-target pairs $Z = \{(x_i, y_i)\}_{i=1}^{N}$, where both $x_i$ and $y_i$ are sequences of tokens. For example, in NL2SQL, $x_i$ is a natural language query and $y_i$ its corresponding SQL command; for summarization, $x_i$ is the content of an article and $y_i$ its summary. During full fine-tuning, the model is initialized to pre-trained weights $\Phi_0$ and updated to $\Phi_0 + \Delta\Phi$ by repeatedly following the gradient to maximize the conditional language modeling objective: $\max_{\Phi} \sum_{(x, y) \in Z} \sum_{t=1}^{|y|} \log P_{\Phi}(y_t \mid x, y_{<t})$. One of the main drawbacks of full fine-tuning is that for each downstream task, we learn a different set of parameters $\Delta\Phi$ whose dimension $|\Delta\Phi|$ equals $|\Phi_0|$.
Thus, if the pre-trained model is large (such as GPT-3 with $|\Phi_0| \approx 175$ billion), storing and deploying many independent instances of fine-tuned models can be challenging, if at all feasible. In this paper, we adopt a more parameter-efficient approach, where the task-specific parameter increment $\Delta\Phi = \Delta\Phi(\Theta)$ is further encoded by a much smaller set of parameters $\Theta$ with $|\Theta| \ll |\Phi_0|$. The task of finding $\Delta\Phi$ thus becomes optimizing over $\Theta$: $\max_{\Theta} \sum_{(x, y) \in Z} \sum_{t=1}^{|y|} \log p_{\Phi_0 + \Delta\Phi(\Theta)}(y_t \mid x, y_{<t})$. In the subsequent sections, we propose to use a low-rank representation to encode $\Delta\Phi$ that is both compute- and memory-efficient. When the pre-trained model is GPT-3 175B, the number of trainable parameters $|\Theta|$ can be as small as 0.01% of $|\Phi_0|$; a worked example appears after this section. Aren't Existing Solutions Good Enough? The problem we set out to tackle is by no means new. Since the inception of transfer learning, dozens of works have sought to make model adaptation more parameter- and compute-efficient. Using language modeling as an example, there are two prominent strategies when it comes to efficient adaptations: adding adapter layers or optimizing some forms of the input layer activations. However, both strategies have their limitations, especially in a large-scale and latency-sensitive production scenario. Adapter Layers Introduce Inference Latency. There are many variants of adapters. We focus on the original design by Houlsby et al., which has two adapter layers per Transformer block, and a more recent one by Lin et al., which has only one per block but with an additional LayerNorm. While one can reduce the overall latency by pruning layers or exploiting multi-task settings, there is no direct way to bypass the extra compute in adapter layers. This seems like a non-issue since adapter layers are designed to have few parameters (sometimes less than 1% of the original model) by having a small bottleneck dimension, which limits the FLOPs they can add. However, large neural networks rely on hardware parallelism to keep the latency low, and adapter layers have to be processed sequentially. This makes a difference in the online inference setting where the batch size is typically as small as one. In a generic scenario without model parallelism, such as running inference on GPT-2 medium on a single GPU, we see a noticeable increase in latency when using adapters, even with a very small bottleneck dimension. This problem gets worse when we need to shard the model as done in Shoeybi et al. and Lepikhin et al., because the additional depth requires more synchronous GPU operations such as AllReduce and Broadcast, unless we store the adapter parameters redundantly many times. Directly Optimizing the Prompt is Hard. The other direction, as exemplified by prefix tuning, faces a different challenge. We observe that prefix tuning is difficult to optimize and that its performance changes non-monotonically in trainable parameters, confirming similar observations in the original paper. More fundamentally, reserving a part of the sequence length for adaptation necessarily reduces the sequence length available to process a downstream task, which we suspect makes tuning the prompt less performant compared to other methods.
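Before turning to the method, the parameter-efficiency target above can be made concrete with a back-of-the-envelope count for a single $d \times d$ weight matrix, using the full rank $d = 12{,}288$ cited earlier and a low LoRA rank. This is only an illustration; the paper's overall 0.01% figure also depends on how many matrices are adapted across the model.

```python
d, r = 12_288, 2            # d_model for GPT-3 175B (as cited); a low LoRA rank
full_finetune = d * d       # entries updated when fine-tuning one d x d matrix
lora_update = 2 * d * r     # entries in A (r x d) plus B (d x r)
print(full_finetune, lora_update, f"{lora_update / full_finetune:.4%}")
# 150994944 49152 0.0326% -- over three orders of magnitude fewer per matrix
```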
Our Method We describe the simple design of LoRA and its practical benefits. The principles outlined here apply to any dense layers in deep learning models, though we only focus on certain weights in Transformer language models in our experiments as the motivating use case. Low-Rank-Parametrized Update Matrices A neural network contains many dense layers which perform matrix multiplication. The weight matrices in these layers typically have full rank. When adapting to a specific task, Aghajanyan et al. show that pre-trained language models have a low "intrinsic dimension" and can still learn efficiently despite a random projection to a smaller subspace. Inspired by this, we hypothesize the updates to the weights also have a low "intrinsic rank" during adaptation.
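Concretely, this hypothesis leads to re-parametrizing a frozen pre-trained weight $W_0$ as $W_0 + BA$, where $B$ and $A$ are the trainable rank-$r$ factors. The following is a minimal PyTorch sketch of one such layer; the $\alpha/r$ scaling and the zero initialization of $B$ follow the released LoRA package, while the initialization constants and hyperparameter defaults here are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 2, alpha: float = 2.0):
        super().__init__()
        # Pre-trained weight W0: frozen during adaptation (random stand-in here).
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Rank decomposition of delta W: A is Gaussian, B is zero, so B @ A
        # starts at zero and training begins exactly from the pre-trained model.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + (B A) x, computed without materializing the d x d delta W.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

    @torch.no_grad()
    def merge(self) -> None:
        # Fold delta W into W0 for deployment: no added inference latency.
        self.weight += (self.lora_B @ self.lora_A) * self.scaling
```

Task switching then amounts to swapping the small (lora_A, lora_B) pair while the shared frozen weight stays resident, which is what keeps the per-task storage cost so low.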
Endless Space 2 is a turn-based strategy science fiction 4X game developed by Amplitude Studios. The sequel to Endless Space (2012), it was available through Steam Early Access from October 2016 and was released on May 18, 2017, to positive reviews.

Gameplay

Endless Space 2 is set in a universe long ago dominated by a race called the Endless. The Endless made great advances in science, culminating in virtualization—uploading their minds into machines and achieving a form of eternal life. This created a schism in their society, splitting it into two opposing factions: the Virtuals, who embraced electronic immortality, and the Concretes, who viewed it as an abomination. The quarrels grew into an open conflict called the Dust Wars, which effectively destroyed their civilization and left only a few scattered survivors. Tens of thousands of years later, the galaxy is again populated by life capable of interstellar travel, scavenging the remains of the Endless empire.

The player controls one of 12 major factions, each with asymmetric gameplay, storylines, homeworlds, spaceships, heroes, and technologies. At the start, the player can choose from the 12 predesigned factions or create a custom faction. Players are given control of a fledgling empire and must expand it by conquering systems. Each system has up to five planets, each with its own environment, climate, stats (such as production and food), and sometimes anomalies. Anomalies can be explored with an explorer ship and grant buffs or debuffs to the entire system. Planet stats determine a planet's effectiveness in specific roles, while environments determine whether it can be colonized; the ability to colonize different environments is unlocked through research. Each planet can be assigned a specialization that grants buffs, with additional bonuses based on climate. Players can construct system infrastructure improvements, including several powerful variants that can be built only once per game.

Players explore and colonize systems using ships; initially travel is restricted to star lanes, but additional forms of travel can be unlocked via research. Research is key to progression, unlocking new constructions, ship hulls, weapons, modules, upgrades, tactics, infrastructure, abilities, and other items. The game currently has 10 factions, including those added by downloadable content. There are four research categories—military; science and technology; business and trade; and empire development—each with five tiers. Higher tiers are unlocked by researching topics in the previous tier.

Politics play an important role. The game features a political system with factions—industrialists, scientists, militarists, pacifists, ecologists, and religious—each with affinities that provide laws granting buffs. The dominant party automatically enacts one law, which grows more potent over time as it moves from established to entrenched. Different government types can modify law effectiveness and the number of parties that can hold office simultaneously. Party support increases by researching certain technologies, building appropriate infrastructure, and performing specific actions (for example, declaring war, building bunkers, and researching weapons all boost militarist support).

To expand their empire, players must colonize systems across the galaxy while competing with other empires that are also trying to win. Players can interact with other empires by declaring war, sending tributes, or forming alliances. Each empire controls its own territory and maintains a distinct relationship with the player (Cold War, War, Wary, etc.). There are also minor civilizations; players can improve relations to receive resources, declare war on them, or assimilate them into their empire if relations are good enough. Opposing empires can also interact with these civilizations.

To fight other empires, players need ships and ground troops. Ship-to-ship battles against enemy fleets play out automatically: fleets are matched based on weapon and defense modules, and engagement range is determined by the strategy card a player selects. Different battle tactics provide bonuses and alter engagement range; players may retreat to save ships at the cost of some damage. Players can invade by sending ground troops to attack defenders in a system. Ground troops can be upgraded and the player sets their composition percentages. Replenishing troops uses manpower, a special resource, and each ship can carry only a limited number of soldiers. Players can weaken a system's defenders by stationing fleets in orbit and besieging it, which reduces enemy troop numbers.

Players can also design ships. Three hull classes—small, medium, and large—are unlocked via the appropriate research tree. Larger hulls have more health, manpower capacity, and module slots, but require more resources and time to build and occupy more fleet space. Each ship has weapon and support modules; players can equip weapons and support modules that grant buffs. Each weapon has different stats and three possible ranges: short, medium, and long. Accuracy varies by range; poor accuracy results in many missed shots and lower damage. Certain weapons have special properties. For example, kinetic weapons are ineffective at long range but can attack incoming missiles, fighters, and bombers. Beam weapons have relatively low damage output but are unaffected by range, making them very consistent.

Development

The game was made available through Steam Early Access on 6 October 2016. The full game was released on 19 May 2017. It received its third major update on 23 March 2018.
The issue of environmentalism in motorsport concerns all of auto racing and efforts to reduce carbon dioxide emissions that contribute to global warming.

Early developments saw several series and teams exploring greener technologies. The International Formula Master series planned to use a petrol–electric hybrid and regenerative braking for the 2007 season, but later announced it would not use the hybrid system and instead opted for regular fuel. Audi's diesel-powered R10 won the American Le Mans Series by a margin of almost 100 points over its nearest rivals. In the United Kingdom, British Touring Car Championship team West Surrey Racing (WSR) ran Rob Collard and Colin Turkington in ethanol-fueled MG ZSs.

New championships have been created with environmental goals central to their mission. In 2012, the Fédération Internationale de l'Automobile (FIA) announced it would govern a new fully electric single-seater championship, the FIA Formula E Championship, which began in September 2014. The second generation of Formula E cars was introduced at the start of the 2018–19 season. The series features multiple manufacturers, including Jaguar, Audi, Nissan, BMW, and Mahindra.

Extreme E is an international off-road racing series that places environmentalism at the forefront. The series features drivers racing the all-electric SUV Spark Odyssey 21 in remote locations chosen to highlight climate change. It also adopts legacy projects that provide environmental and social support. To reduce the carbon emissions associated with air freight, the RMS St Helena was purchased and refitted to transport all equipment and cars to each location; it also houses a laboratory for climate science research conducted en route.

In other series, Formula One's governing body, the FIA, did not address the sport's environmental impact until May 2007, when it held a discussion in Monaco during the Grand Prix. Three months earlier, the Honda Works Team had announced it would run a sponsorless car for the 2007 season. The Honda RA107 featured a livery depicting the Earth, symbolizing Honda's environmental commitment, and displayed only the Honda 'H' and Bridgestone logos. The car mostly received a cynical reception. Red Bull Racing's Mark Webber observed, "It's good Honda is going green — but there are still 35 private jets parked 20 kilometres down the road." Briggs commented, "Honda's 'Earth Car' may have attracted cynicism, but the issues it highlights are moving up motorsport's agenda."

In 2013, the FIA made plans to switch from V8 engines to turbocharged V6 engines, after the previous downgrade from V10 engines to V8 engines in 2006. In 2009, the sport introduced kinetic energy recovery systems; following a brief ban, these systems were reintroduced in 2011, and the technical regulations were further revised for 2014.

In North American motorsport, IndyCar was the first major open-wheel series to reduce its greenhouse gas emissions by switching its fuel from methanol to the more environmentally friendly ethanol for the 2007 season.

In sportscar racing, both the Le Mans Series and the American Le Mans Series have made efforts to be more environmentally friendly. Audi and Peugeot produced diesel race cars—the R10 TDI and 908 HDi FAP, respectively—and both competed in the 2007 24 Hours of Le Mans, where the German manufacturer came out on top. In the British GT Championship, a diesel-powered Aston Martin DBRS9 made series history by winning at the Snetterton round. Since 2011, the top class of sports car racing, LMP1, has featured hybrid powertrains, with entries from manufacturers such as Audi, Toyota (2012), Porsche (2014) and Nissan (at Le Mans in 2014). In 2012 Drayson Racing introduced the Lola B12/69E, a modified Lola prototype chassis, and set an electric car world record in 2013. In 2019 the ACO announced Mission H24 to bring a hydrogen-powered racing car to the 24 Hours of Le Mans in 2024. The Volkswagen I.D. R set several records between 2018 and 2020.

In touring car and European open-wheel racing, alongside WSR's efforts, Paul O'Neill entered a privateer BTCC entry, racing a bioethanol-powered Vauxhall Astra at the Brands Hatch meeting during the 2006 season.
Real estate website RENTCafé analyzed the 2000 U.S. Census and the 2016 American Community Survey to identify changes in key metrics for 11,000 U.S. ZIP codes and produced a list of America's top 10 most gentrified areas. The winner was downtown Los Angeles; runners-up include trendy neighborhoods in Washington, D.C., Fort Worth, Houston, Philadelphia, and New York City.

To assess gentrification, researchers considered significant increases in median home value and median household income that long-time residents had to contend with, as well as an influx of newcomers holding a bachelor's degree or higher. "We ranked the ZIP codes on all three scales and created an average ranking to determine which areas experienced gentrification from 2000 to 2016," the researchers said. The study focused only on ZIP codes that had more than 2,000 occupied housing units in both 2000 and 2016. All dollar amounts were adjusted for inflation to 2018 dollars using the Bureau of Labor Statistics' Consumer Price Index Inflation Calculator.

Of the top 20 ZIP codes deemed most gentrified, here are the top 10 (change in median home value, median household income, and share of residents with higher education):

1. Los Angeles, California (90014): home value +707%, household income +95%, higher education +857%
2. Washington, District of Columbia (20001): home value +207%, household income +163%, higher education +212%
3. Houston, Texas (77003): home value +284%, household income +71%, higher education +443%
4. Philadelphia, Pennsylvania (19123): home value +203%, household income +95%, higher education +230%
5. New York, New York (10039): home value +356%, household income +32%, higher education +168%
6. Fort Worth, Texas (76102): home value +323%, household income +103%, higher education +122%
7. Brooklyn, New York (11211): home value +167%, household income +79%, higher education +95%
8. Philadelphia, Pennsylvania (19146): home value +404%, household income +51%, higher education +106%
9. Brooklyn, New York (11222): home value +116%, household income +56%, higher education +97%
10. Brooklyn, New York (11216): home value +194%, household income +48%, higher education +149%

Although gentrification is changing most major cities, "the really spectacular cases seem to be limited to relatively few," the study notes. More than half of the top 10 ZIP codes are located on the East Coast and are concentrated in a handful of popular cities. Overall, the data show that the average home value in 2016 for the top 20 most gentrified ZIP codes was $446,730, with an average increase of at least 224% since 2000. "This happened amidst a wave of supply growth: 19 of the top 20 gentrified ZIP codes experienced increases in the number of households between 2000 and 2016."

Median household income in the Washington, D.C., ZIP code 20001 grew 163%, from $37,000 in 2000 to almost $97,000 in 2016. Still, it's worth noting that in LA's 90014 ZIP code most residents continue to live under the poverty line: despite 95% income growth driven by well-educated, high-earning newcomers, the median household income in 2016 was only $24,670.

Overall, researchers conclude that gentrification in popular urban areas remains a major issue and could continue to displace long-time residents.
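For readers curious how the quoted "average ranking" step might work, the snippet below is a hypothetical reconstruction using three of the reported ZIP codes; the field names and the reduction to three rows are assumptions, and this is not RENTCafé's actual methodology code.

```python
# Hypothetical reconstruction of the study's ranking step: rank ZIP codes on
# each of the three growth metrics, then average the ranks.
zips = {
    "90014": {"home_value": 707, "income": 95, "education": 857},
    "20001": {"home_value": 207, "income": 163, "education": 212},
    "77003": {"home_value": 284, "income": 71, "education": 443},
}

metrics = ["home_value", "income", "education"]
ranks = {z: [] for z in zips}
for m in metrics:
    # Rank 1 goes to the largest percentage increase on this metric.
    ordered = sorted(zips, key=lambda z: zips[z][m], reverse=True)
    for rank, z in enumerate(ordered, start=1):
        ranks[z].append(rank)

avg_rank = {z: sum(r) / len(r) for z, r in ranks.items()}
for z in sorted(avg_rank, key=avg_rank.get):
    print(z, avg_rank[z])   # lowest average rank = most gentrified
```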
A train wreck that spilled Boeing 737 fuselages has become a destination for curious rafters on the Lower Clark Fork River in Montana. It could be Tuesday before the giant fuselages are removed from the river, as hot weather and tough terrain have slowed cleanup after a Thursday derailment that tossed three fuselages into the water. The wreck has turned into a scenic stop along a popular rafting route where the wide canyon slows and lets people get a good look. "It's kind of a surreal thing that comes around the corner. You would never expect it," said Joshua Flanagan, owner of Spokane-based rafting company Wiley E. Waters.

The plane parts were en route to Boeing's Renton, Wash., plant from Wichita, Kan., when 19 Montana Rail Link freight cars derailed and lost the expensive cargo west of Missoula. There were six fuselages in total, but the others luckily fell near the tracks above. Crews reopened the railroad line Saturday night, but recovering the large parts from the river is "going considerably slower than we hoped," Montana Rail Link spokeswoman Lynda Frost told the Daily News. "By day's end, we will be lucky to get one up from the river."

It's still not clear what caused the derailment; it is under investigation. The rail company had another derailment in September 2013 about 20 miles away, when 23 cars derailed while carrying cargo such as sugar and wood chips, according to KECI-TV.

A crew of about 50 is working to remove the parts from the river. They brought heavy equipment in from Washington and Montana to hoist them up, but the steep embankment complicates recovery. The train was also carrying soybeans and denatured alcohol, according to the Missoulian, but none of that spilled. Frost added that none of the Boeing parts are toxic to the river, and they are not a danger to rafters. The Forest Service is allowing rafters to float the river as long as they keep to one side of the waterway.

Flanagan's rafting business is fielding more calls from customers who want to see the fuselages before crews remove them. "I think people are really anxious to see this unique sight," Flanagan said. The fuselages are visible just before the last rapid on Flanagan's route.

Montana Rail Link crews on Sunday began cleaning up debris from an embankment below the site of last week's train derailment near Fish Creek, including three aircraft fuselages that went into the Clark Fork River. MRL had cleared the area around the tracks and reopened the rail line to train traffic Saturday. Company spokeswoman Lynda Frost said work started about 7 a.m. Sunday to pull the Boeing Co. 737 fuselages and train cars out of the water and up the embankment. MRL is using eight pieces of heavy machinery to pull the fuselages up the hill with cables. The cleanup, originally expected to be completed by the end of Sunday or early Monday, will likely continue through Tuesday, Frost said. "It's taking longer than we had originally anticipated," she said. MRL stopped train traffic on the line for about 12 hours Sunday during the cleanup, and similar shutdowns are likely as work proceeds.

Nineteen train cars derailed about 10 miles west of Alberton at 4 p.m. Thursday. Thirteen cars were carrying aircraft components, soybeans and denatured alcohol, and six were empty. While no alcohol leaked and no soybeans spilled, three cars with 737 fuselages went into the river, Frost said. No one was injured in the derailment, the cause of which remains under investigation.

Boeing spokeswoman Lynn Steinberg said the seven cars of aircraft components included six 737 fuselages, as well as parts for 747s and 777s. The components were being transported from a production facility in Wichita, Kansas, to a Boeing assembly plant in Renton, Washington. Steinberg declined to give a value for the aircraft components. A Boeing team was on the scene Friday to determine the status of the components and work with MRL to retrieve them; it had not yet issued a full evaluation. Until that evaluation is completed, Steinberg said, it is unknown whether any of the parts will still be usable.

Montana Fish, Wildlife and Parks said there are no plans to close the Clark Fork during the cleanup, but brief delays could occur in the area when the 737 fuselages and other parts are moved. One such delay occurred about 1 p.m. Sunday, when floaters were held up about half an hour, FWP spokeswoman Christine Oschell said. "It's possible there will be other brief delays on the river," she added. MRL has been working with FWP and the Whitewater Rescue Institute to keep the river open during the cleanup; members of the institute are on the river informing floaters of what happened and watching for hazards, and they will remain there throughout the cleanup. "We intend to keep the river open unless we feel that there is a safety concern," Oschell said.

Zoo Town Surfers owner Jason Shreder said he was upset by FWP's decision Friday to close the Clark Fork in the area of the derailment — a decision the agency reversed only a few hours later. "Especially since I had seen the adequate safety measures they put in place there, it's good they kept it open," he said.
The asymptotic safety approach to quantum gravity offers a nonperturbative renormalization framework for constructing a consistent, predictive quantum field theory of gravity and spacetime geometry. It is based on a nontrivial fixed point of the renormalization group (RG) flow: the running coupling constants approach this fixed point in the ultraviolet (UV) limit, preventing divergences in physical observables. It also has predictive power: only a subset of initial coupling configurations at some RG scale flow into the fixed point as the scale increases, so if certain couplings are measured experimentally, asymptotic safety can fix the remaining ones to ensure the UV fixed point is reached. If realized in nature, asymptotic safety would have far-reaching consequences wherever quantum gravitational effects matter, but exploration of these implications—so far including phenomenological studies in particle physics, astrophysics, and cosmology—remains in its infancy.

Asymptotic safety and the parameters of the Standard Model

The mass of the Higgs boson. The Standard Model in combination with asymptotic safety might be valid up to arbitrarily high energies. Based on this assumption, it is possible to make a statement about the Higgs boson mass. The first concrete results were obtained by Shaposhnikov and Wetterich in 2010. Depending on the sign of the gravity-induced anomalous dimension there are two possibilities. For one sign, the Higgs mass is restricted to a narrow window. If, on the other hand, the other sign holds—which is the favored possibility—the Higgs mass must take a specific value with an uncertainty of only a few GeV. In this spirit, that value can be regarded as a prediction of asymptotic safety. The result is in surprisingly good agreement with the experimental data measured at CERN in 2013 by the ATLAS and CMS collaborations, which determined a Higgs mass of about 125 GeV, consistent with the prediction.

The fine-structure constant. By taking into account the gravitational correction to the running of the fine-structure constant in quantum electrodynamics, Harst and Reuter studied the impact of asymptotic safety on the infrared (renormalized) value of the constant. They found two fixed points suitable for the asymptotic safety construction, both of which imply a well-behaved UV limit without a Landau-pole-type singularity. The first fixed point is characterized by a vanishing gauge coupling, in which case the infrared value is a free parameter. In the second case, however, the fixed-point value is non-zero, and its infrared value is a computable prediction of the theory. In a more recent study, Christiansen and Eichhorn showed that quantum fluctuations of gravity generically generate self-interactions for gauge theories, which must be included in a discussion of a potential ultraviolet completion. Depending on the gravitational and gauge parameters, they conclude that the fine-structure constant might be asymptotically free and not run into a Landau pole, while the induced coupling for the gauge self-interaction is irrelevant and thus its value can be predicted. This is an explicit example where asymptotic safety solves a problem of the Standard Model—the triviality of the U(1) sector—without introducing new free parameters.

Astrophysics and cosmology. Phenomenological consequences of asymptotic safety can also be expected in astrophysics and cosmology. Bonanno and Reuter investigated the horizon structure of "renormalization group improved" black holes and computed quantum gravity corrections to the Hawking temperature and the corresponding thermodynamic entropy. Using an RG improvement of the Einstein–Hilbert action, Reuter and Weyer obtained a modified form of the Einstein equations that alters the Newtonian limit, potentially explaining flat galaxy rotation curves without postulating dark matter. In cosmology, Bonanno and Reuter argued that asymptotic safety changes the very early Universe, possibly resolving the horizon and flatness problems of standard cosmology. Moreover, asymptotic safety could drive inflation without an inflaton field, with the cosmological constant providing the driving term. The scale invariance associated with the non-Gaussian fixed point underlying asymptotic safety may account for the near scale invariance of primordial density perturbations. Using different methods, Weinberg further analyzed asymptotically safe inflation.
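For reference, the fixed-point scenario described at the top of this article is usually stated in symbols as follows. This is a schematic, textbook-style summary, not a formula taken from the studies cited:

```latex
% Dimensionless couplings g_i run with the RG scale k and hit a
% non-Gaussian ultraviolet fixed point g*.
\begin{align}
  g_i(k) &= k^{-d_i}\,\bar{g}_i(k) ,\\
  \beta_i\bigl(g_1^{*},\dots,g_n^{*}\bigr) &= 0 \quad \text{with } g^{*} \neq 0 ,\\
  \lim_{k\to\infty} g_i(k) &= g_i^{*} .
\end{align}
% Predictivity requires the ultraviolet critical surface of g* to be
% finite-dimensional, so only finitely many couplings must be fixed by
% experiment; the rest follow from the flow into the fixed point.
```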
The asymptotic safety approach to quantum gravity offers a nonperturbative renormalization framework for constructing a consistent, predictive quantum field theory of gravity and spacetime geometry. It is based on a nontrivial fixed point of the renormalization group (RG) flow: the running coupling constants approach this fixed point in the ultraviolet (UV) limit, preventing divergences in physical observables. It also has predictive power: only a subset of initial coupling configurations at some RG scale flow into the fixed point as the scale increases, so if certain couplings are measured experimentally, asymptotic safety can fix the remaining ones to ensure the UV fixed point is reached. If realized in nature, asymptotic safety would have far-reaching consequences wherever quantum gravitational effects matter, but exploration of these implications—so far including phenomenological studies in particle physics, astrophysics, and cosmology—remains in its infancy. Asymptotic safety and the parameters of the Standard Model The mass of the Higgs boson The Standard Model in combination with asymptotic safety might be valid up to arbitrarily high energies. Based on this assumption, it is possible to make a statement about the Higgs boson mass. The first concrete results were obtained by Shaposhnikov and Wetterich in two thousand ten. Depending on the sign of the gravity-induced anomalous dimension there are two possibilities. For one sign, the Higgs mass is restricted to a narrow window. If, on the other hand, the other sign holds—which is the favored possibility—the Higgs mass must take a specific value with an uncertainty of only a few GeV. In this spirit one can consider a prediction of asymptotic safety. The result is in surprisingly good agreement with the experimental data measured at CERN in two thousand thirteen by the ATLAS and CMS collaborations, which determined a value consistent with the prediction. The fine-structure constant By taking into account the gravitational correction to the running of the fine-structure constant in quantum electrodynamics, Harst and Reuter studied the impact of asymptotic safety on the infrared (renormalized) value of the constant. They found two fixed points suitable for the asymptotic safety construction, both of which imply a well-behaved UV limit without a Landau-pole-type singularity. The first fixed point is characterized by a vanishing gauge coupling, in which case the infrared value is a free parameter. In the second case, however, the fixed-point value is non-zero, and its infrared value is a computable prediction of the theory. In a more recent study, Christiansen and Eichhorn showed that quantum fluctuations of gravity generically generate self-interactions for gauge theories, which must be included in a discussion of a potential ultraviolet completion. Depending on the gravitational and gauge parameters, they conclude that the fine-structure constant might be asymptotically free and not run into a Landau pole, while the induced coupling for the gauge self-interaction is irrelevant and thus its value can be predicted. This is an explicit example where asymptotic safety solves a problem of the Standard Model—the triviality of the U one sector—without introducing new free parameters. Phenomenological consequences of asymptotic safety can also be expected in astrophysics and cosmology. 
Bonanno and Reuter investigated the horizon structure of "renormalization group improved" black holes and computed quantum gravity corrections to the Hawking temperature and the corresponding thermodynamic entropy. Using an RG improvement of the Einstein–Hilbert action, Reuter and Weyer obtained a modified form of the Einstein equations that alters the Newtonian limit, potentially explaining flat galaxy rotation curves without postulating dark matter. In cosmology, Bonanno and Reuter argued that asymptotic safety changes the very early Universe, possibly resolving the horizon and flatness problems of standard cosmology. Moreover, asymptotic safety could drive inflation without an inflaton field, with the cosmological constant providing the driving term. The scale invariance associated with the non-Gaussian fixed point underlying asymptotic safety may account for the near scale invariance of primordial density perturbations. Using different methods, Weinberg further analyzed asymptotically safe inflation.
In ecology, local abundance is the relative representation of a species in a particular ecosystem, usually measured as the number of individuals found per sample. The ratio of the abundance of one species to that of other species in the same ecosystem is referred to as relative species abundance. Both indicators are relevant for computing biodiversity. A variety of sampling methods are used to measure abundance. For larger animals these include spotlight counts, track counts, roadkill counts, and observations at monitoring stations. In many plant communities, abundance is measured by plant cover, the relative area occupied by different species in a small plot. Abundance can also be measured by identifying and counting every individual of every species in a given sector, although species distributions are often skewed, with a few species accounting for most individuals. Relative species abundance is calculated by dividing the number of individuals of one species by the total number of individuals of all species. These measures are part of community ecology. Understanding patterns within a community is easier when species richness is low, but most communities are species-rich. Measuring species abundance helps reveal how species are distributed within an ecosystem. For example, salt marshes receive an influx of seawater, so only a few species adapted to both salt and fresh water tend to be abundant. Conversely, in land-locked wetlands species abundance is more evenly distributed among the species that live there. In most ecosystems where abundance has been measured, a small number of species are abundant while many are rare. Abundant species are often generalists, whereas many rare species are specialists. A species that occurs at high density in multiple localities is likely to be relatively abundant across the region; high local abundance therefore tends to be linked to broad regional distribution. Species with high abundance also tend to produce more offspring, which increases the chance of colonizing new areas, creating a positive feedback loop that produces a few widespread "core" species and many restricted, scarce "satellite" species. Species abundance distribution (SAD) describes how common or rare species are within an ecosystem and allows researchers to assess how species are distributed. SAD is one of the most basic measurements in ecology and is used frequently; many methods for its measurement and analysis have been developed. One example is semi-quantitative abundance rating, which involves estimating abundance within a specified area (quadrat). Two commonly used scales are A.C.F.O.R. and D.A.F.O.R.

The A.C.F.O.R. scale:
A — Abundant
C — Common
F — Frequent
O — Occasional
R — Rare

The D.A.F.O.R. scale:
D — Dominant
A — Abundant
F — Frequent
O — Occasional
R — Rare

These methods are useful for obtaining rough estimates of species abundance, but they are not exact or fully objective. If a more quantitative method is available, it should be used to produce more reliable and measurable data.

See also: Abundance estimation; Cover-abundance; Living Planet Index; Occupancy–abundance relationship; Plant cover; Range (biology); Relative abundance distribution; Species richness.
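To illustrate the relative-abundance calculation defined above, here is a minimal Python sketch; the species names and counts are invented sample data, not measurements from any real survey.

```python
# Relative species abundance: individuals of one species divided by the
# total number of individuals of all species in the sample.
# The counts below are invented for illustration.
counts = {
    "cordgrass": 120,
    "glasswort": 45,
    "sea_lavender": 25,
    "saltgrass": 10,
}

total = sum(counts.values())
relative_abundance = {species: n / total for species, n in counts.items()}

# Print species from most to least abundant; note the skew typical of real
# communities, where a few species account for most individuals.
for species, share in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
    print(f"{species}: {share:.2%}")
```

The same dictionary of proportions is the natural starting point for plotting a species abundance distribution.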
Zhou Cui Ying, 36, remains in the hospital after a heavy tree limb fell on her at Washington Square Park in San Francisco's North Beach a week ago, leaving her paralyzed from the waist down. Zhou was watching her daughters play when the freak accident occurred. From her bed at San Francisco General, she told KTVU she is in a lot of pain and doesn't know if she'll ever walk again. "My head hurts and my back hurts. I cannot move," she said. "The bottom part, no feeling, I don't feel anything." Zhou is on strong painkillers. Her husband, Jian Con Tan, has taken time off work to care for their children, and the family has set up a GoFundMe page to help with medical expenses. On the afternoon of August 12, Zhou was at Washington Square Park with her daughters because they were early for their dental appointments in nearby Chinatown. She showed photos she had taken of 9-year-old Angelina and 5-year-old Arosia feeding the birds at the park shortly before the accident. Zhou says she was focused on her daughters and never heard or saw the limb come down. When asked if she remembered the tree falling, Zhou replied, "No, it was very fast." She says the next thing she knew, she was in the hospital. Her husband, Tony Tan, tells KTVU police picked him up from his work and rushed him to the hospital. "It took doctors 10 hours to do all the surgery for her," Tan said. He says his wife's spinal cord is damaged and that he and his daughters are struggling with what happened, worried about what the future holds. "I don't know what to do. I cannot go to work. It's really hard for me to sleep, and even at night I think about what happened for weeks and weeks. It's really sad for me," he said. The stay-at-home mom says her girls are her life. She's shocked that she's in this condition. "Sometimes, I feel sad, don't want to see another person like me," Zhou said. She says she's speaking about what happened because she doesn't want it to happen to someone else. Her hope is to be able to walk again with her daughters. "Maybe walking with them, playing. Yeah, make them happy." Zhou says she's not sure when she'll be released, but her next step is rehab. The family says no one from the city has spoken to them. San Francisco (KPIX 5) — One week after a large tree branch fell onto a woman in a San Francisco park, doctors told her husband she will never walk again. Cui Ying Zhou was at Washington Square Park with her two daughters last Friday when a tree branch snapped and landed on her head. The limb weighed about 100 pounds and fell from a pine tree. Her husband, Jian Cong Tan, told KPIX 5 that the branch fractured his wife’s skull and broke her lower spine. Doctors said Zhou will definitely be paralyzed, even if they can reconnect her spinal cord. "Her lower spine was broken. The doctor showed me the X-ray and said, 'Her lower spine separated into two parts; her skull was broken too,'" Tan said. Tan said their daughters, who are only five and nine years old, are too distraught to go back to school. He has taken leave from work to care for the girls, who witnessed the accident. Despite the seriousness of her injuries, Zhou is showing some signs of improvement. "It’s amazing that she’s still breathing and can talk to me right now. It’s very amazing. It’s like a miracle," Tan said. He said he hasn’t slept all week and is just trying to stay as strong as he can for the girls. "They need help as well, because they saw what happened and probably hide everything inside for themselves. 
They probably don’t want to talk about it," he added. San Francisco Mayor Ed Lee released a statement the day after the accident. A statement said, "His thoughts are with the family... and we are praying for the recovery of this young woman." Those words offered little consolation to a family that has been shattered. "I don't know what to say. It's so terrible; this is the worst day of my life right now," said Tan. "I feel really terrible." Arborists have been out in San Francisco parks all week doing visual assessments. They say the canary pines are in good condition and are calling the incident a freak, tragic accident. Zhou is expected to be out of the ICU in a couple of days.
This dual approach ensures that the training data is not only learnable but also aligned with human expectations. Ultimately, we construct a dataset of approximately 150,000 training pairs. The model is then trained for one epoch using the Online Merging Optimizer, with a learning rate of 7 × 10^-7.

Subsection 4.3 Online Reinforcement Learning

To develop a robust reward model for online RL, we adhere to a set of carefully defined labeling criteria. These criteria ensure that the responses generated by the model are not only high-quality but also aligned with ethical and user-centric standards. The specific guidelines for data labeling are as follows:

Truthfulness: Responses must be grounded in factual accuracy, faithfully reflecting the provided context and instructions. The model should avoid generating information that is false or unsupported by the given data.
Helpfulness: The model's output should be genuinely useful, addressing the user's query effectively while providing content that is positive, engaging, educational, and relevant. It should follow the given instructions precisely and offer value to the user.
Conciseness: Responses should be succinct and to the point, avoiding unnecessary verbosity. The goal is to convey information clearly and efficiently without overwhelming the user with excessive detail.
Relevance: All parts of the response should be directly related to the user's query, dialogue history, and the assistant's context. The model should tailor its output to ensure it is perfectly aligned with the user's needs and expectations.
Harmlessness: The model must prioritize user safety by avoiding any content that could lead to illegal, immoral, or harmful behavior. It should promote ethical conduct and responsible communication at all times.
Debiasing: The model should produce responses that are free from bias, including but not limited to gender, race, nationality, and politics. It should treat all topics equally and fairly, adhering to widely accepted moral and ethical standards.

The queries used to train the reward model are drawn from two distinct datasets: publicly available open-source data and a proprietary query set characterized by higher complexity. Responses are generated from checkpoints of the Qwen models, which have been fine-tuned with different methods (SFT, DPO, and RL) at various stages of training. To introduce diversity, these responses are sampled at different temperature settings. Preference pairs are created through both human and automated labeling processes, and the training data for DPO is also integrated into this dataset.

In our online reinforcement learning (RL) framework, we employ Group Relative Policy Optimization (GRPO). The query set used for training the reward model is identical to the one used in the RL training phase. The order in which queries are processed during training is determined by the variance of their response scores, as evaluated by the reward model: queries with higher variance in response scores are prioritized to ensure more effective learning (see the sketch below). We sample 8 responses for each query. All models are trained with a global batch size of 2048 and 2048 samples per episode, where one sample consists of a query-response pair.
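As a rough illustration of the variance-based query ordering just described, here is a minimal Python sketch. The reward model and the response sampler are stubbed placeholders, and every name here is hypothetical; this is not code from the Qwen training stack.

```python
import statistics

def reward_model_score(query: str, response: str) -> float:
    # Placeholder: stands in for the trained reward model described above.
    raise NotImplementedError

def order_queries_by_score_variance(queries, sample_responses, n_samples=8):
    """Score n_samples sampled responses per query and return the queries
    sorted so that those with the highest score variance come first."""
    variance_by_query = {}
    for query in queries:
        responses = sample_responses(query, n_samples)  # e.g. policy rollouts
        scores = [reward_model_score(query, r) for r in responses]
        variance_by_query[query] = statistics.pvariance(scores)
    return sorted(queries, key=variance_by_query.get, reverse=True)
```

The intuition behind this ordering is that queries on which the policy's responses receive widely varying rewards carry the strongest learning signal for a group-relative method like GRPO.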
Subsection 4.4 Long Context Fine-tuning

To further extend the context length of Qwen2.5-Turbo, we introduce longer SFT examples during post-training, enabling the model to better align with human preference on long queries. In the SFT phase, we employ a two-stage approach. In the first stage, the model is fine-tuned exclusively on short instructions, each containing up to 32,768 tokens. This stage uses the same data and training steps as those employed for the other Qwen2.5 models, ensuring strong performance on short tasks. In the second stage, fine-tuning combines short instructions (up to 32,768 tokens) with long instructions (up to 262,144 tokens). This hybrid approach effectively enhances the model's instruction-following ability on long-context tasks while maintaining its performance on short tasks.

During the RL stage, we use a training strategy similar to that used for the other Qwen2.5 models, focusing solely on short instructions. This design choice is driven by two primary considerations: first, RL training is computationally expensive for long-context tasks; second, there is currently a scarcity of reward models that provide suitable reward signals for long-context tasks. Additionally, we find that adopting RL on short instructions alone can still significantly enhance the model's alignment with human preferences on long-context tasks.

Section 5 Evaluation

The base models produced by pre-training and the instruction-tuned models produced by post-training are evaluated with a comprehensive evaluation suite, including both commonly used open benchmarks and skill-oriented in-house datasets. The evaluation suite is designed to be primarily automatic, with minimal human interaction. To prevent test-data leakage, we exclude potentially contaminated data using n-gram matching when constructing the pre-training and post-training datasets. Following the criteria used in Qwen2, a training sequence is removed from the training data if there exists a test sequence such that the longest common subsequence (LCS) of the two tokenized sequences is at least 13 tokens long and at least 0.6 times the length of the shorter of the two sequences; a sketch of this check follows.
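The following Python sketch implements the decontamination criterion just stated. It is a from-scratch illustration, not code released with Qwen2.5, and the whitespace split is a placeholder for the actual tokenizer.

```python
def lcs_length(a, b):
    """Longest common subsequence length of two token lists via standard
    dynamic programming (O(len(a) * len(b)) time and space)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def is_contaminated(train_text, test_text):
    """Qwen2-style rule: flag a training sequence when its LCS with a test
    sequence is >= 13 tokens and >= 0.6 * the shorter sequence's length."""
    train_tokens = train_text.split()  # placeholder tokenization
    test_tokens = test_text.split()
    lcs = lcs_length(train_tokens, test_tokens)
    return lcs >= 13 and lcs >= 0.6 * min(len(train_tokens), len(test_tokens))
```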
Subsection 5.1 Base Models

We conduct comprehensive evaluations of the base language models of the Qwen2.5 series. The evaluation of base models primarily emphasizes their performance in natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, and multilingual capabilities. The evaluation datasets include:

General Tasks: MMLU (5-shot), MMLU-Pro (5-shot), MMLU-redux (5-shot), BBH (3-shot), ARC-C (25-shot), TruthfulQA (0-shot), Winogrande (5-shot), HellaSwag (10-shot).
Mathematics & Science Tasks: GPQA (5-shot), TheoremQA (5-shot), GSM8K (4-shot), MATH (4-shot).
Coding Tasks: HumanEval (0-shot), HumanEval+ (0-shot), MBPP (0-shot), MBPP+ (0-shot), MultiPL-E (0-shot; Python, C++, Java, PHP, TypeScript, C#, Bash, JavaScript).
Multilingual Tasks: grouped into four categories: (a) Exam: M3Exam (5-shot; we only choose examples that require no image), IndoMMLU (3-shot), ruMMLU (5-shot), and translated MMLU (5-shot on Arabic, Spanish, French, Portuguese, German, Italian, Japanese, and Korean); (b) Understanding: BELEBELE (5-shot), XCOPA (5-shot), XWinograd (5-shot), XStoryCloze (0-shot), and PAWS-X (5-shot); (c) Mathematics: MGSM (8-shot CoT); (d) Translation: Flores-101 (5-shot).

For base models, we compare Qwen2.5 models with Qwen2 models and other leading open-weight models of comparable parameter scale.

Qwen2.5-72B & Qwen2.5-Plus

We compare the base models of Qwen2.5-72B and Qwen2.5-Plus to other leading open-weight base models: Llama-3-70B, Llama-3-405B, Mixtral-8x22B, and our previous 72B version, Qwen2-72B. The Qwen2.5-72B base model significantly outperforms its peers in the same category across a wide range of tasks. It achieves results comparable to Llama-3-405B while using only one-fifth of the parameters. Furthermore, compared to its predecessor, Qwen2-72B, Qwen2.5-72B shows marked improvements in nearly all benchmark evaluations, particularly excelling in general tasks, mathematics, and coding challenges. With significantly lower training and inference costs, Qwen2.5-Plus achieves very competitive results compared to Qwen2.5-72B and Llama-3-405B, outperforming the other baseline models on HellaSwag, TheoremQA, MATH, GSM8K, MultiPL-E, Multi-Mathematics, and Multi-Translation. Moreover, Qwen2.5-Plus achieves 64.0 on MMLU-Pro, 5.9 points higher than Qwen2.5-72B.

Qwen2.5-14B/32B & Qwen2.5-Turbo

The Qwen2.5-Turbo, Qwen2.5-14B, and 32B models are compared against baselines of similar size, including Yi-1.5-34B, Gemma2-27B, and Qwen1.5-32B. The Qwen2.5-14B model demonstrates solid performance across various tasks, particularly excelling in general tasks such as MMLU and BBH, where it achieves scores of 79.7 and 78.2, outcompeting competitors of larger sizes. Qwen2.5-32B, in particular, showcases exceptional capabilities, often surpassing larger models of a similar class. Notably, it outperforms its predecessor Qwen1.5-32B significantly, especially in challenging areas such as mathematics and coding, with notable scores of 57.7 on MATH and 84.5 on MBPP.
March 29, 2018 — Lured by rising SUV sales, automakers flood market with models

By Nick Carey

NEW YORK (Reuters) — Demand for sport utility vehicles in the United States is booming, but the number of new models vying for a share of that market is growing even faster, threatening the fat profits automakers have enjoyed. (Photo caption: The 2019 Lincoln Aviator is displayed at an event on the eve of the 2018 New York International Auto Show in New York City, March 27, 2018.) At the New York Auto Show this week, automakers will unveil another flock of SUVs ranging from a revamped Toyota RAV4, Toyota Motor Corp.’s top-selling model in 2017, to flashy new luxury Cadillac and Lincoln SUVs. Premium brands such as Fiat Chrysler Automobiles NV’s Maserati, which once dealt exclusively in low-slung sports cars, are getting into the game. “I think everyone has read the same tea leaves — right now there seems to be insatiable demand,” said General Motors Co.’s Johan de Nysschen, referring to SUVs and crossovers. De Nysschen, head of GM’s Cadillac luxury division, spoke to Reuters on Wednesday while standing next to the brand’s new XT4 crossover model. “Everyone is going into these segments with compelling new entries,” he said, “and that means there are going to be winners and there are going to be losers.” He added: “We aim to be among the winners.” According to data from automotive consultancy LMC Automotive, by 2023 there will be 90 mainstream SUV and crossover models on the U.S. market, as well as 90 luxury models, compared with 2017 levels of 65 mainstream SUV and crossover models and 53 luxury models. Premium automakers like BMW AG (BMWG.DE), Mercedes‑Benz and Volkswagen AG’s (VOWG_p.DE) Audi brand are expanding their U.S. sport-utility vehicle plants. U.S. sales of mainstream and luxury SUVs and crossovers alike have more than doubled since 2010 and rose 5 percent and 7 percent, respectively, last year — even though overall industry sales declined 2 percent in 2017. LMC Automotive forecasts that growth will slow for SUVs and crossovers in 2018 and every year through 2025, even as the number of models on the market is set to rise. For an interactive graphic on "The Rise of the SUV," see tmsnrt.rs/2I6YBSX. “There are still some legs left to grow in the SUV market, but growth is slowing and will eventually level off,” said Jeff Schuster, LMC’s senior vice president of forecasting. "This is a bright spot in the market, which is why everyone is flocking to it with new products." Over the next few years, millions of nearly new SUV and crossover models will come off lease and return to the market, providing cheap competition for new models. Around 40 percent of the roughly 4 million nearly new vehicles that will come off lease in 2018 in the United States will be SUVs and crossovers, rising to 44 percent in 2019 and 47 percent in 2020, according to Cox Automotive forecasts. “Now that you’re seeing more SUVs starting to come off lease, that will automatically put pressure on new SUV pricing,” said Karl Brauer, executive publisher for automotive research firm Kelley Blue Book (KBB).
“The market is not yet saturated and there are all kinds of niches that have yet to be filled,” Fiorani said. “We’re five or 10 years from even thinking about market saturation.” Automakers also maintain that there is room for many more options as long as they can stand out in a crowd. "There are clearly a lot of entrants, but we are going to differentiate ourselves with a completely different look to our brand," Joy Falotico, head of Ford Motor Co.'s (F.N.) luxury Lincoln division, said in an interview Tuesday. Ford’s new midsize SUV with three rows of seats, the Aviator, will hit the U.S. market in 2019. Rather than the aggressive or dominant forms adopted by Lincoln’s competitors, Falotico said the Aviator’s form is meant to evoke "beauty and calmness." But according to KBB’s Brauer, "Simple math suggests that you’ll have more models with lower volume. You can’t have that many SUVs on the market and have all of them grow volume. Some of them are going to have to give." Brauer points to average vehicle prices in the SUV and crossover market as an indication of what is happening in the segment. Cumulatively, prices rose just 4 percent from 2012 to 2017, but the average price dipped 0.5 percent to $35,991 in 2017 from $36,163 in 2016, according to KBB data. Faced with more competition in a slower-growing market, automakers will likely be forced to resort to consumer discounts to boost sales, which will cut into profit margins. "The idea that you’re going to make the same profit as you did three years ago is probably unlikely," Brauer said. "The good news is that they have room to lower margins." Reporting by Nick Carey; Editing by Matthew Lewis.
Ten years ago, I took off for Washington, DC, for a three-month travel nurse assignment. I went by myself. Was I scared? Yes, indeedy. My family wasn't exactly in shambles, but a reasonable facsimile thereof, and I wanted to get away—far away. My heart was still heavily bruised from a relationship that fell apart, and I wanted a challenge professionally. I had heard horror stories about the hospital where I'd be working, but I didn't really care. They all turned out to be true, by the way—and then some. I mainly wanted to be where I only had to take care of myself and my dog, and where, at any given time, nobody would know exactly where I was or who I was. The travel company would pay for my furnished apartment, utilities, and travel expenses. Sounded good to me, so I hit the road. My apartment was just a couple of minutes away from National Airport, the Pentagon, and all kinds of cool things. I got all moved in pretty painlessly. My best friend had ridden up with me, and we did lots of sightseeing that first weekend. At one point we were waiting for the Metro at the Pentagon—I hadn't quite figured out the system at that point—and I thought I'd snap a picture. From somewhere up above a deep voice boomed, "Don't take that picture!" I stopped and looked up, expecting to see God, and heard the voice again. Then I noticed a security guard slowly driving past. I was kind of freaked out, I must admit, but I showed them. Later, I took a photo of the Pentagon from the top of the Washington Monument. The hospital where I worked was pretty much a nightmare. I worked in the pediatric ER, which was severely understaffed and very busy. The people I worked with were nice and for the most part helpful, but it was just crazy all the time. For the first time in my life, I lost weight without trying because I never had time to eat while I was at work, and when I wasn't at work I was often too tired to eat. And if you know me, you know how tired I was. I was expecting top-of-the-line, cutting-edge equipment and facilities; boy, was I wrong. Some of their stuff had been around since the days of Ben Franklin. Thanks to a fellow travel nurse, I managed to figure out which section of the ER was the best to work in and was quick to volunteer for it. It helped that most of the other nurses didn't like working in that area, but for me it was great. The doctors only came over when necessary, and I was pretty much able to do my work without much interference. That place was really busy. On days off when I wasn't exhausted, I'd go exploring. DC was great. I loved walking around the National Mall and going to all the museums. I had never been so alone in my life, but I really kind of liked it. It was fun not having to answer to anyone, to just go with the flow. My apartment was small and cozy, my nearly blind dog adjusted easily, and I had everything I could ever need just a stone's throw away. Several friends came to visit, and it was fun checking out all the coolness of DC with them. Before it was all over I even met a guy I really liked. If I had liked the job, everything would have been perfect. Hard to believe that was ten years ago; it seems like yesterday in so many ways. I'll always be glad I did it, because if I hadn't I would have regretted it for the rest of my life. In fact, I enjoyed DC so much that I came back the next year—well, kind of. I lived and worked in Virginia, out in the suburbs of DC. That hospital was like a vacation. I even extended my contract and stayed three extra months. 
If the cost of living weren't so outrageous, I'd have stayed there permanently; that's how much I liked working at that hospital: fun, nice people, easy workload, low-acuity patients for the most part. What more could you want from a pediatric ER job? But as they say, all good things must come to an end. I came back home and to my old job. It wasn't a bad gig, and the people were great.
Senators on both sides of the aisle unanimously passed a bill to provide continuing health benefits and compensation to first responders who became ill after the 9/11 attacks. Senate Democrats struck a deal Wednesday with Sen. Tom Coburn, R‑Okla., who agreed to drop his objections after the bill's cost was reduced by about $2 billion. Coburn emerged from a closed-door meeting with Senate Majority Leader Harry Reid and New York Democrats Chuck Schumer and Kirsten Gillibrand to announce the agreement. Under the deal, the total ten-year cost would fall from $6.2 billion to $4.2 billion: $1.5 billion for health benefits and $2.7 billion for compensation. The proposal later passed the House, 206-60, and was headed to President Obama's desk. Despite its popularity, the 9/11 health bill had been delayed in the Senate by Coburn, who faced criticism for opposing it as "overly generous" and containing "unnecessary and duplicative compensation funds." "I'll stand in the way of anything that doesn't make sense and doesn't spend our money wisely, so, you know, it doesn't matter what the issue is—we're in such a hole, Jon, that we don't have the luxury of not getting things right," Coburn told ABC News' Jonathan Karl. "And so we've come to an agreement that costs less, doesn't allow double-dipping, doesn't allow exorbitant lawyer fees, and we've worked it out, so we're going to take care of the folks, but we're going to do it in a way that doesn't punish the people who are going to pay the bill." "But the way you've been hammered on this, standing against the heroes of 9/11…" replied Karl. "I'm used to being hammered," Coburn said. "I'm not standing against them at all. I'm standing for us as America, the realization that we have to do things efficiently and economically. We've worked out a deal now that spends a whole lot less money, accomplishes exactly the same thing, and does it in a way that protects our future. Every bill should have to go through that—and the fact that they don't is a problem. That's why we're $14 trillion in debt." "So I don't mind taking the heat," he continued. "You know, as a physician I care about those people. As a citizen, I care about the firefighters of my own city and every other city. The fact is you can still do it right. So you take all the heat, but you still get it done." "So what we need is more people taking more heat so we get the right things done." "Everybody's come to an agreement. We've got a handshake. We're waiting for the paperwork. And it'll be a done deal by the end of the day. But had we not done that, we wouldn't have had it and they wouldn't have had it until next year. So the fact is we accomplished their goals and we accomplished protecting the future of the country." "And saved some money?" he was asked. "We saved a lot of money." "A lot of people are sick and hurt and we need to take care of them. They deserve it. That's why it's such a bipartisan issue," Reid told ABC News when he emerged from the meeting. The compromise mandates the closing of the Victims Compensation Fund within five years, limits fees paid out to attorneys, and closes loopholes that allow people to re-file claims that have previously been rejected by the Fund, a source told ABC News. Ken Feinberg has been floated as a possible special master for the Fund, the source said. In the past few days, Coburn's threats to try to block Senate passage of the bill this year infuriated 9/11 first responders and lawmakers alike. "Where's his heart?" 
asked John Feal, founder of the FealGood Foundation, a nonprofit organization pushing for the bill's passage. "Because it's not in the right place." "These men and women behind me have gone eight Christmases suffering without any help from the federal government, so I question his heart." "I believe we have the votes to prevail," said Senator Charles Schumer at a press conference on Tuesday. "The only thing standing in our way is people who will try to run out the clock. That is not fair, not right, and that flies in the face of America. Enough, enough, enough with the delays!" Now the bill appears set to pass the Senate this afternoon after the chamber ratifies the START nuclear treaty with Russia. The 9/11 bill would then need to be passed by the House, which is expected to occur later this evening; if that happens, Congress could leave town for the Christmas break late Wednesday. "We are on the verge of a Christmas miracle," Senator Kirsten Gillibrand told reporters a few days ago, and that miracle now appears set to become reality.

In a photo from September 15, 2001, a New York City fireman calls for more rescue workers to make their way into the rubble of the World Trade Center. A retooled bill providing medical care for firefighters and other emergency responders to the September 11 attacks could be resurrected soon in the Senate, a few weeks after Republicans blocked the measure, backers said. "We believe we are on a path to victory by the end of this week," Schumer said, though he warned that unexpected obstacles could arise. He and fellow New York Senator Kirsten Gillibrand told reporters they will propose changes to their bill to win enough Republican support for passage as Congress winds down its legislative session for the year. They hope to do this by producing a less expensive bill that would pay for itself and leave a $57 million surplus over ten years.
The branches of science, also referred to as scientific fields or disciplines, are commonly divided into three major groups: formal sciences, natural sciences, and social sciences. Formal sciences study formal systems—such as logic, mathematics, theoretical computer science, information theory, systems theory, decision theory, and statistics—and use a priori methods rather than empirical ones. Natural sciences study natural phenomena, including cosmological, geological, physical, chemical, and biological aspects of the universe; they can be divided into physical sciences and life sciences (biology). Social sciences study human behavior in its social and cultural aspects. Scientific knowledge must be based on observable phenomena and be verifiable by other researchers working under the same conditions, although standards of verifiability may vary within disciplines. Natural, social, and formal sciences constitute the fundamental sciences that underlie interdisciplinary fields and applied sciences such as engineering and medicine. Some specialized disciplines span multiple categories, incorporating terminology and expertise from different fields while retaining their own specializations. Unlike other branches, the formal sciences are not concerned with the validity of theories based on observations in the real world (empirical knowledge), but rather with the properties of formal systems based on definitions and rules. Hence there is disagreement on whether the formal sciences actually constitute a science. Methods of the formal sciences, however, are essential to the construction and testing of scientific models dealing with observable reality, and major advances in the formal sciences have often enabled major advances in the empirical sciences. Logic is the systematic study of valid rules of inference, i.e., the relations that lead to the acceptance of one proposition (the conclusion) on the basis of a set of other propositions (premises). More broadly, logic is the analysis and appraisal of arguments. It has traditionally included the classification of arguments; the systematic exposition of logical forms; the validity and soundness of deductive reasoning; the strength of inductive reasoning; the study of formal proofs and inference (including paradoxes and fallacies); and the study of syntax and semantics. Historically, logic has been studied in philosophy (since ancient times) and mathematics (since the mid-19th century). More recently, logic has been studied in cognitive science, which draws on computer science, linguistics, philosophy, and psychology, among other disciplines. Information science: Information science is an academic field primarily concerned with the analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study the application and use of knowledge in organizations and the interaction between people, organizations, and information systems, aiming to create, replace, improve, or understand those systems. Mathematics: Mathematics, in the broadest sense, is a formal science; but traditionally it refers more specifically to the coalition of four areas: arithmetic, algebra, geometry, and analysis, which are, to some degree, the study of quantity, structure, space, and change, respectively. Statistics: Statistics is the study of the collection, organization, and interpretation of data. 
It deals with all aspects of this, including planning data collection through the design of surveys and experiments. A statistician is someone well versed in the thinking necessary for the successful application of statistical analysis; such people often gain this experience through work in a wide range of fields. There is also mathematical statistics, which is concerned with the theoretical basis of the subject. The word "statistics," when referring to the scientific discipline, is singular, as in "Statistics is an art." This should not be confused with the word "statistic," which refers to a quantity (such as the mean or median) calculated from a set of data, whose plural is "statistics" (e.g., "this statistic seems wrong" or "these statistics are misleading"). Systems theory is the transdisciplinary study of systems in general, aimed at elucidating principles that can be applied to all types of systems across fields of research. The term does not yet have a well-established, precise meaning, but systems theory can reasonably be considered a specialization of systems thinking and a generalization of systems science. The term originates from Bertalanffy's General System Theory (GST) and is used in later efforts in other fields, such as Talcott Parsons's action theory and Niklas Luhmann's sociological autopoiesis. In this context, the word "systems" refers specifically to self-regulating systems, i.e., systems that are self-correcting through feedback. Self-regulating systems are found in nature, including the physiological systems of the human body, local and global ecosystems, and the climate. Decision theory (or the theory of choice, not to be confused with choice theory) is the study of an agent's choices.
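To make the statistic/statistics distinction drawn above concrete, the following short Python snippet computes two statistics, the sample mean and the median, from a small data set; the data values are arbitrary and chosen only for illustration.

data = [2.0, 3.0, 3.0, 5.0, 7.0]

mean = sum(data) / len(data)  # one statistic computed from the data

ordered = sorted(data)
mid = len(ordered) // 2
# The median is another statistic; collectively, such values are "statistics."
median = ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2

print(mean, median)  # 4.0 3.0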
The School of Computing and Information Science and the Center for the Development of Information Technology (DIT) is the youngest school at Saint Louis University (SLU) and traces its roots to the vision of then Vice President for Finance and later University President Rev. Fr. Ghisleen de Vos (1976–1983). Forward-thinking and progressive, Fr. de Vos foresaw the automation of university systems such as accounting and enrollment at a time when computerization was uncommon in the country. With acquisitions of IBM systems in 1969 and 1980, SLU also catered to the computing needs of other institutions in the region. The SLU Computer Center handled these tasks until 1990, when it evolved into the Institute of Information and Computing Science and began offering a Computer Science program. The institute became a college in 1994, and the university’s computing and IT management was later devolved to the newly established MIS and SLU NET offices. Courses in Information Technology, Mathematics, Information Management, and Library and Information Science were added over time. Though new, the school was a trailblazer in IT education, and its advanced curriculum was strengthened through international linkages, faculty scholarships and training, and visits from international lecturers. The School hosted the first Northern Luzon international IT conference in 2007, with students, professionals, and experts from around the world in attendance. It has since conducted annual regional IT congresses showcasing research and projects from various universities and industries. As a Center of Development in IT education, the School continually introduces program innovations to meet current industry demands and required skills. The School's ICT Research Laboratory designed and manages the university's Learning Management System and the Research Digital Repository System, which serve as online repositories for course materials, research outputs, forums, and class records. The School has conducted and is completing studies in promising IT research areas such as natural language processing using local dialects (e.g., Ilokano and Tagalog), computational mathematics and algorithms, mobile and wireless computing, and the measurement of IT literacy and fluency. Professionals skilled in digital arts technologies are among the most in-demand workers across several industries. To meet this demand and in support of the Philippine government's call for HEIs to offer ladderized technical and vocational programs, the School offers short diploma programs in digital animation, multimedia systems, digital design, editing, and publishing. The latest addition to the School's graduate programs—the Master of Science in Service Management Engineering (MSSME)—makes SLU the first institution in the country to offer this emerging program. The degree aims to advance, manage, evaluate, and optimize systems in the global service industry. Developed in coordination with Professor Guido Dedene, a renowned IT expert, the program is multidisciplinary and includes courses from SLU's Schools of Engineering and Architecture and Accountancy and Business Management. The School was also selected by the Philippine Statistical Research and Training Center as a regional training center to accelerate national statistical capacity building. Beyond producing technologically skilled professionals, the School seeks to be socially relevant by sharing its expertise and resources. 
In 2007 it donated numerous computer units to Baguio City National High School (BCNHS) as part of a collaborative project with Belgium's Close the Gap (CTG) alliance, and it designed and conducted training programs for BCNHS teachers on computer and web-based applications. The School's future looks bright as it keeps pace with rapid modernization. The School of Computing and Information Sciences recognizes that the power to create, command, and control information technology comes with great responsibility. The School therefore focuses not only on setting new academic directions toward the advancement of IT and computing education and research, but also on advocating the ethical use of information and computing. SLU was the first institutional Internet service provider in Northern Luzon when it became a member — one of only ten in the country at the time — of the Philippine Network Foundation (PHNet) consortium in 1994.
CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models

Abstract

In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and Flow Matching, CosyVoice demonstrated high prosody naturalness, content consistency, and speaker similarity in speech in-context learning. Recently, significant progress has been made in multi-modal large language models (LLMs), where the response latency and real-time factor of speech synthesis play a crucial role in the interactive experience. Therefore, in this report, we present an improved streaming speech synthesis model, CosyVoice 2, which incorporates comprehensive and systematic optimizations. Specifically, we introduce finite-scalar quantization to improve the codebook utilization of speech tokens. For the text-speech LM, we streamline the model architecture to allow direct use of a pre-trained LLM as the backbone. In addition, we develop a chunk-aware causal flow matching model to support various synthesis scenarios, enabling both streaming and non-streaming synthesis within a single model. By training on a large-scale multilingual dataset, CosyVoice 2 achieves human-parity naturalness, minimal response latency, and virtually lossless synthesis quality in streaming mode. We invite readers to listen to the demos online.

Introduction

In recent years, neural text-to-speech (TTS) synthesis models have garnered significant attention for surpassing traditional concatenative and statistical parametric methods. These models have achieved high fidelity and naturalness for pre-defined specific speakers. Recent studies show that zero-shot TTS models are able to synthesize speech for any speaker by imitating the timbre, prosody, and style of a reference speech. Beyond their in-context learning (ICL) capability, zero-shot TTS models benefit from large-scale training data, achieving synthesis quality and naturalness nearly indistinguishable from human speech. Recent zero-shot TTS models can be broadly divided into three categories: codec language models, feature diffusion models, and their hybrid systems. Codec language models utilize a speech codec model to extract discrete speech representations and employ an autoregressive or masked language model to predict the speech tokens, which are then synthesized into waveforms via codec vocoders. Continuous speech representations have also been explored. Language model-based TTS can generate varied and prosody-consistent speech via autoregressive sampling. Inspired by advances in image generation, denoising diffusion and flow matching models have been introduced into non-autoregressive (NAR) speech synthesis. Early diffusion-based TTS models required a duration prediction for each text unit (phone) to address the length disparity between text and speech features. However, this rigid alignment can affect naturalness, resulting in flat prosody. To mitigate this issue, cross-attention and Diffusion Transformers (DiT) have been introduced into NAR TTS models. Recent research points to simpler approaches for text-speech alignment in NAR TTS models, such as E2 TTS, F5-TTS, and Seed-TTS. In these models, the input text is padded with special tokens to match the total speech length, which is either predicted automatically by an utterance duration prediction module or specified by the user in advance.
Since NAR TTS models are not constrained by codec vocoders, they can achieve superior speech quality. Hybrid systems combine a text-to-codec language model with a codec-to-feature diffusion model. The language model handles the alignment between text and speech as well as utterance duration prediction, while the codec-to-feature diffusion model synthesizes speech features (the Mel spectrum) based on the generated codec and other conditions. By leveraging the strengths of both generative models, hybrid systems achieve high diversity, prosody consistency, and speech quality. Despite the success of recent zero-shot TTS models, they generally operate in non-streaming (offline) mode, which requires the complete input text and synthesizes the entire utterance before returning the waveform. This results in high latency, negatively impacting user experience in applications like voice chat. To address this issue, streaming synthesis has been explored for language model-based zero-shot TTS models, but diffusion-based TTS models and hybrid systems lack well-established streaming solutions. Building on the success of CosyVoice, we introduce CosyVoice 2, a streaming zero-shot TTS model with improved prosody naturalness, content consistency, and speaker similarity. Our contributions include unifying streaming and non-streaming synthesis in a single framework by proposing a unified text-speech language model and a chunk-aware causal flow matching model, yielding streaming synthesis that is lossless compared to offline mode. We also simplified the LM architecture by removing the text encoder and speaker embedding, allowing pre-trained textual large language models (LLMs) to serve as the backbone and enhancing context understanding. We replaced vector quantization (VQ) in the speech tokenizer with finite scalar quantization (FSQ), improving codebook utilization and capturing more speech information. Finally, we upgraded the instructed TTS capacity to support more instructions, including emotion, accent, role style, and fine-grained control. In CosyVoice 2, the instruction and zero-shot capacities are integrated into a single model, enabling more versatile and vivid synthesis. Through these systematic modifications and optimizations, CosyVoice 2 achieves human-parity synthesis quality and is nearly lossless in streaming mode. The unified framework relaxes deployment requirements, enabling a single model to support both streaming and non-streaming synthesis. The upgraded instructed TTS capacity gives users a more powerful and convenient way to generate a wide variety of speech. In addition, the chunk-aware flow matching design can also be applied to NAR TTS models, which suggests the potential for streaming NAR models.

CosyVoice 2

CosyVoice 2 builds on the design philosophy of its predecessor by separating the semantic and acoustic information of speech signals and modeling them independently. The speech generation process is redefined as a gradual semantic decoding procedure in which conditional information is progressively incorporated. Specifically, the text-speech language model (LM) focuses solely on semantic information, decoding high-level text tokens into supervised semantic speech tokens. In the Flow Matching model, acoustic details, such as timbre, are introduced through speaker embeddings and reference speech, converting speech tokens into the Mel spectrum for a given speaker.
Finally, a pre-trained vocoder model reinstates the phase, transforming the Mel spectrum back into the original audio signal. The following sections introduce the details of CosyVoice 2 and the modifications for streaming synthesis in the following respects: the text tokenizer, the supervised semantic speech tokenizer, the unified text-speech LM for streaming/non-streaming synthesis, and the chunk-aware Flow Matching model.

Text Tokenizer

CosyVoice 2 uses raw text directly as input, tokenized with a BPE-based text tokenizer. This eliminates the need for a frontend model that obtains phonemes via a grapheme-to-phoneme (g2p) transformation. This approach not only simplifies the data preprocessing workflow but also enables the model to learn the pronunciations of words in various contexts in an end-to-end manner. Unlike the tokenizers commonly used in textual LLMs, CosyVoice 2 masks out one-to-many tokens. This prevents the pronunciation of a token from becoming excessively long and reduces corner cases caused by data sparsity. Specifically, if a BPE token encodes more than one Chinese character, it is masked out, and each character is encoded separately during tokenization. Other languages, such as English, Japanese, and Korean, are not subject to special handling.

Supervised Semantic Speech Tokenizer

We insert a finite scalar quantization (FSQ) module into the encoder of the SenseVoice-Large ASR model. At the training stage, the input speech X passes through Encoder 1 to obtain intermediate representations, where Encoder 1 consists of six Transformer blocks with rotary positional embeddings. The intermediate representations are then fed into the FSQ module for quantization, and the quantized representations are passed through the rest of the SenseVoice-Large modules, including Encoder 2 and the ASR decoder, to predict the posterior probabilities of the corresponding text tokens. In the FSQ module, the intermediate representations H are first projected into a D-dimensional low-rank space, and the values of each dimension are quantized into the integer range [-K, K] with a bounded round operation. The quantized low-rank representations are then projected back to the original dimension for the subsequent modules. At the training stage, straight-through estimation is used to approximate the gradients of the FSQ module and Encoder 1. A speech token is obtained by calculating the index of the quantized low-rank representation in a (2K+1)-ary system. Encoder 1, the low-rank projector of the FSQ module, the bounded round operation, and the index calculation together form the speech tokenizer of CosyVoice 2. Our speech tokenizer works at a token rate of 25 Hz, i.e., 25 speech tokens per second.

Unified Text-Speech Language Model

In CosyVoice 2, the pre-trained textual LLM Qwen2.5-0.5B is used as the text-speech language model to generate speech tokens autoregressively with the input text as a prompt. Like other LMs, the text-speech LM is trained with a next-token-prediction scheme. Unlike the previous CosyVoice, we remove the speaker embedding to avoid information leakage. More importantly, we find that such an utterance-level vector contains not only speaker identity but also language and paralinguistic information, which harms the prosody naturalness and cross-lingual capability of the text-speech LM.
We also abandon the text encoder of the previous CosyVoice, since we find that the Qwen2.5-0.5B model is powerful enough to align text and speech tokens on its own, making the text encoder unnecessary. Benefiting from the simplicity of the text-speech LM, we can build a unified model for both streaming and non-streaming synthesis.
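To make the FSQ step described above concrete, here is a minimal NumPy sketch of the forward pass. The encoder width, the low-rank dimension D, the bound K, the tanh-based bounding function, and the random projection weights are all illustrative assumptions, since the text specifies only the low-rank projection, the bounded round into [-K, K], and the (2K+1)-ary index; during training, the straight-through estimator mentioned above would stand in for the hard round in the backward pass.

import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 512  # assumed encoder width (not stated in the text)
D_LOW = 8      # assumed low-rank dimension D
K = 4          # each dimension is rounded into the integer range [-K, K]

# Random stand-ins for the learned down- and up-projections.
W_down = rng.normal(scale=0.05, size=(D_MODEL, D_LOW))
W_up = rng.normal(scale=0.05, size=(D_LOW, D_MODEL))

def fsq_quantize(h):
    """Quantize intermediate representations h of shape (T, D_MODEL)."""
    z = h @ W_down                        # project into the low-rank space
    z_q = np.round(K * np.tanh(z))        # bounded round into [-K, K]
    levels = 2 * K + 1
    digits = (z_q + K).astype(np.int64)   # shift each level into [0, 2K]
    token_ids = (digits * levels ** np.arange(D_LOW)).sum(axis=-1)
    h_up = z_q @ W_up                     # project back for the later modules
    # In training, gradients would flow via the straight-through trick,
    # e.g. z + stop_gradient(z_q - z), rather than through the hard round.
    return token_ids, h_up

tokens, features = fsq_quantize(rng.normal(size=(25, D_MODEL)))  # ~1 s at 25 Hz
print(tokens.shape, features.shape)  # (25,) (25, 512)

Note that each token id lies in [0, (2K+1)^D - 1], so the effective codebook size is fixed by D and K rather than learned, which is consistent with the high codebook utilization the paper attributes to replacing VQ with FSQ.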
Climate change in New Mexico reflects the effects of human-caused increases in atmospheric carbon dioxide. According to the U.S. Environmental Protection Agency, most of the state has warmed at least one degree Fahrenheit over the last century; across the southwestern United States, heat waves are becoming more common and snow is melting earlier in spring. In coming decades, climate change is likely to reduce flows in the Colorado, Rio Grande, and other rivers; threaten livestock health; increase the frequency and intensity of wildfires; and convert some rangelands to desert. New Mexico is experiencing higher temperatures and a drier climate. Snowpack: As the climate warms, less precipitation falls as snow and more snow melts during winter, so the amount of accumulated snow is decreasing. Since the 1950s, snowpack has declined in New Mexico and in Colorado, Utah, and Wyoming—the headwaters of the Rio Grande, San Juan, Colorado, and Navajo rivers. Diminishing snowpack in northern New Mexico will shorten the season for skiing and other winter recreation and may allow subalpine fir and other high-altitude trees to grow at higher elevations, shifting the tree line. A higher tree line would decrease the extent of alpine tundra ecosystems, which could threaten some species. Water availability: the changing climate is likely to increase the need for water but reduce the supply. Warmer temperatures increase the rate at which water evaporates (or transpires) into the air from soils, plants, and surface waters, so irrigated farmland would need more water. But less water is likely to be available because precipitation is unlikely to increase enough to make up for the additional water lost to evaporation. Annual rainfall is more likely to decrease than increase, so soils are likely to be drier and periods without rain longer, making droughts more severe. The decline in snowpack could further limit the supply of water for some purposes. Mountain snowpacks are natural reservoirs: they collect winter snow and release water when the snow melts in spring and summer. Over the past 50 years, snowpack has been melting earlier in the year. Dams capture most meltwater and retain it for use later in the year, but upstream of these reservoirs less water is available during droughts for ecosystems, fish, water-based recreation, and landowners who draw water directly from flowing rivers. Due to climate change, New Mexico's water resources have declined. In 2019, the Center for Biological Diversity named New Mexico's Gila River the nation's most endangered river because of climate change. Agriculture: Increasing droughts and higher temperatures are likely to interfere with New Mexico’s farms and cattle ranches. Hot weather can threaten cows’ health and cause them to eat less, grow more slowly, and produce less milk. Livestock operations could also be impaired by fire and changes in the landscape from grassland to woody shrubs more typical of a desert. Reduced water availability would create challenges for ranchers, as well as farmers who irrigate fruits, vegetables, pecans, and other nut trees. Wildfires and changing landscapes: Higher temperatures and drought are likely to increase the severity, frequency, and extent of wildfires, which could harm property, livelihoods, and human health. On average, more than 2 percent of the land in New Mexico has burned per decade since 1984. Wildfire smoke can reduce air quality and increase medical visits for chest pains, respiratory problems, and heart problems. 
The combination of more fires and drier conditions may expand deserts and otherwise change parts of New Mexico’s landscape. Many plants and animals living in arid lands are already near the limits of what they can tolerate. A warmer, drier climate would generally extend the Chihuahuan Desert to higher elevations and expand its geographic range. In some cases, native vegetation may persist and delay or prevent expansion of the desert. In other cases, fires or livestock grazing may accelerate the conversion of grassland to desert in response to the changing climate. For similar reasons, some forests may change to desert or grassland. Pests: Warmer, drier conditions make forests more susceptible to pests. Drought reduces trees' ability to defend against attacks from pests such as bark beetles, which have infested 200,000 acres in New Mexico. Temperature controls the life cycles and winter mortality rates of many pests. With higher winter temperatures, some pests can persist year-round, and new pests and diseases may become established. Extreme heat: Hot days can be unhealthy—even dangerous. Certain people are especially vulnerable, including children, the elderly, the sick, and the poor. High air temperatures can cause heat stroke and dehydration, and affect cardiovascular, respiratory, and nervous systems. Higher temperatures are amplified in urban settings where paved and other surfaces store heat. Warmer air can also increase the formation of ground-level ozone, a key component of smog. Construction crews may increasingly operate on altered schedules to avoid the heat of the day. New Mexico is part of the Southwest region. Since the 1970s, New Mexico's average temperature has risen by 2.7 degrees Fahrenheit, in part because of increased greenhouse gas emissions. This makes the Southwest the hottest and driest region in the United States. Climate change threatens the natural resources and public health of tribal communities. Rising temperatures and increasing drought are likely to decrease the availability of certain fish, game, and wild plants on which the Navajo and other tribes have relied for generations. Water may be less available for domestic consumption, especially for those who are not served by municipal systems or reliable wells; about 30 percent of the people on the Navajo Nation must haul water to meet daily needs. Recurring drought and rising temperatures may also degrade the land itself. On the Arizona portion of the Navajo Nation, for example, the Great Falls Dune Field has advanced almost a mile in the last 60 years, threatening roads, homes, and grazing areas.
Space Funeral is an independently created role-playing video game and art game by Irish developer Stephen Gillmurphy (thecatamites). The short game was created using RPG Maker 2003 and centers on a boy named Phillip, who leaves home to save his world from a mysterious corruption. Space Funeral is notable for its parodies of the horror and role-playing game genres, its crude art style, and its frequent use of blood in dialogue, graphics, and themes.

Gameplay: Players primarily control Phillip; after the game's first major area they also control Leg Horse. The journey begins in Phillip's home, Scum Vullage, as he searches for the City of Forms, described as the origin of everything in the game's world. Throughout the game, players encounter twisted, often bloody creatures that take the place of non-playable characters (NPCs). The gameplay resembles a typical 2D, turn-based role-playing game (RPG), but includes a "Mystery" function that can be used only once per battle and has effects unique to each enemy type. This ability closely resembles "praying" in EarthBound but with a greater and more prevalent narrative focus. Certain enemies have absurd weaknesses—for example, exposure to silent films can make some of them sentimental. The game also has quirky status effects, such as "buff" and "sad," along with more typical ones like "poisoned."

Plot: Space Funeral begins with Phillip, a perpetually crying, pajama-clad purple boy, seeing a wizard in Scum Vullage who tells him that his world has been corrupted and does not have much time left. The wizard says the only hope for survival is to find the City of Forms, a perfect city from which all things in the game's world originate. Phillip leaves the village and soon meets Leg Horse, a horse made of severed legs who is later revealed to have been Prince Horace, the former ruler before the "Great Change." The player passes through the Blood Cavern, home to the game's first boss, the Blood Ghoul, and arrives in the City of Thieves, inhabited by a vast array of criminals who are especially vulnerable to bibles. Visiting Leg Horse's former home reveals that his brother has also been corrupted, and the group is ambushed by "20th Century Boy," the corrupted version of his brother. Phillip notices that certain objects appear as graphical glitches—called "errors" by the denizens—who recognize them but cannot remember what they once were before the Great Change. After players defeat the King of Crime, they reach the City of Forms, an intensely glitched area that resembles a video game debug room. The "forms" refer to the game's sprites. There they discover Moon, a former artist who first sought the City for inspiration. She found it so perfect that she lost her purpose as an artist and decided to corrupt the world so she could create again. After defeating Moon, the game returns to the default RPG Maker appearance and the characters revert to their normal selves, implying the previous appearance was the result of Moon's corruption. However, a corrupted house from the original world remains, suggesting it is not entirely gone.

Development: In an interview with the Tumblr blog fuckyeahspacefuneral, developer thecatamites said the game's art style was "based on the weird chunky pixel gore from Monster Party, especially the way it could be hard to figure out what a wall of tiled bloody heads was meant to represent in game space."
The developer said the music was chosen by "pulling together things based on kind of superficially similar tendencies and almost creating a fake tradition in that way which could change how you progress from there." They also said Space Funeral was largely inspired by the games Bat Castle and Monster Party. In a different interview, Gillmurphy said he had played EarthBound in his teens but added it was "kinda too late for it to have an impact" at the time of development. Several websites have listed Space Funeral among games very reminiscent of EarthBound, including that interviewer's own site, Paste. The game's focus on "corruption" versus the "perfect" graphics of the default RPG Maker was the developer's way of decrying what he saw as a modern classicism around the RPGs of the 1990s, in which people treated those games as the "peak" of the medium and tried to copy them as closely as possible instead of experimenting and trying new things. He believed that when RPG Maker games break from tradition, they are more interesting.

Reception: Space Funeral received positive critical reviews, with critics citing the game's unusual art style, music, and setting. Filipe Salgado of Kill Screen rated the game 75/100, saying its "messiness" contrasted with the tendency of most games to tie up loose ends, despite its underlying systems being the same as any Japanese role-playing game. Quintin Smith of Rock, Paper, Shotgun described the game as "Final Fantasy directed by Alejandro Jodorowsky," calling the art "disturbed" and the music "awesome." Space Funeral gained a cult following over time, partly as a result of attention from YouTubers and streamers.
Other than trust in your scene partner, another crucial skill for an actor on stage is improvisation—the ability to think on their feet. It's human to forget lines or directions; after all, we are not robots with photographic memories. When that happens, you need to improvise. You might think improvisation is unrelated to your scene partners, but you'd be wrong: it cannot be done alone. If you go off the lines and your partner does not cooperate, it will be obvious something is wrong. That is why trust exercises are so important. A good partner can make or break the whole show.

During improvisation, the partner's role is to keep the scene going; if you know your partner's lines, try to feed them in a natural way, for example by phrasing them as a question. Mr. Sam paused to think of an example before continuing: "For example, the line your partner has forgotten could be, 'I have to go to the market this afternoon.' When you see your partner hesitating or improvising, you can say something like, 'Yes, by the way, aren't you supposed to be going to the market this afternoon?'" Do not correct your partner on the spot, because that will tip the audience off to the mistake, and that is the one thing you mustn't do on stage—unless, of course, it is a purposeful plot device.

"There is only one key rule to improvisation: always say yes. Don't stop your partner's flow or the momentum of the show; keep the scene going. If you're good and natural, the audience might not even realize you're improvising—and sometimes the best material comes from improv."

That was all Mr. Sam could offer on the theory of the exercise. He could talk all day about theory, but acting required practical lessons, so he moved on to the next step. "Alright, to keep things moving, work with the partner you have. There's no script—make up any scene and continue from there. Remember, for improvisation, always say yes!"

Jun Yang had no idea his English teacher was so knowledgeable about the arts and acting. Then again, he thought the teacher simply hadn't had an opportunity to show that side of himself. That, however, wasn't his concern. He turned to the fiendishly handsome young man standing next to him. The grin looked innocent, but Jun Yang wasn't fooled; he could see the scheming behind it.

To stop Feng Qi from derailing the exercise, Jun Yang knew he had to take the lead. If he set the theme and flow of the scene, it would be harder for Feng Qi to steer it someplace strange. Yes, that was the plan! With that in mind, Jun Yang thought quickly and began before Feng Qi had a chance to speak.

"Hey, how's it going, Feng Qi? Enjoying your day at the market? The weather's nice, isn't it?"

It wasn't particularly original—he had copied Mr. Sam's prompt—but at least it did the trick. There was no way Feng Qi could turn a chance encounter at the market into something else—or so Jun Yang thought.

Feng Qi smiled and, following the cue, continued, "Oh yes. The weather today is wonderful. I am definitely enjoying myself." He paused to let the message sink in, then added, "How about you, Jun Yang? Are you enjoying a great day as well?"

"Yes, I am…" Jun Yang said, and then he was at a loss. He hadn't prepared what to say next; he'd been so focused on setting up a normal scene that he'd forgotten how to keep it going.

Like a panther sensing an opening in its prey, Feng Qi pounced with a wickedly charming line: "Oh? Is that because of the company you have today?
I had no idea you enjoyed my company so much…"

Jun Yang was flustered—did he have to admit that? Why was this happening? It crossed his mind that he had been too passive. Maybe offense really was the best defense. He had been too busy trying to defuse Feng Qi's charms, and perhaps that wasn't the way to go. Wanting to switch tactics, Jun Yang tossed the question back to Feng Qi.

"Yes, I'm enjoying the company I have." Jun Yang knew that was part of the improvisation exercise, but when he said it he still blushed a little. "But how about you? Are you enjoying the company you're with?"

Feng Qi raised his brows. He had expected Jun Yang to squirm and change the subject, so this aggressive counter surprised him.
The geology of Mars is the scientific study of the planet’s surface, crust, and interior. It emphasizes the composition, structure, history, and physical processes that shape the planet and is analogous to terrestrial geology. In planetary science, the term geology is used in its broadest sense to refer to the study of the solid parts of planets and moons, incorporating aspects of geophysics, geochemistry, mineralogy, geodesy, and cartography. A neologism, areology (from the Greek Arēs, meaning Mars), sometimes appears as a synonym for Mars's geology in popular media and science fiction; the term is also used by the Areological Society.

Mars is a terrestrial planet that has undergone planetary differentiation. The InSight lander, which touched down on 26 November 2018, deployed a sensitive seismometer to enable three-dimensional mapping of the deep interior. Using information from InSight, scientists reported on 25 October 2023 that Mars has a radioactive magma ocean beneath its crust.

Global physiography: Mars exhibits a number of distinct large-scale surface features that indicate the geological processes that have operated on the planet over time. This section introduces several of the larger physiographic regions of Mars. Together, these regions illustrate how geologic processes involving volcanism, tectonism, water, ice, and impacts have shaped the planet on a global scale.

Hemispheric dichotomy: The northern and southern hemispheres of Mars are strikingly different in topography and physiography. This dichotomy is a fundamental global geologic feature of the planet. The northern part is an enormous topographic depression: about one-third of the surface (mostly in the northern hemisphere) lies 3–6 km lower in elevation than the southern two-thirds. This is a first-order relief feature on par with the elevation difference between Earth's continents and ocean basins. The dichotomy is also expressed in two other ways: as a difference in impact crater density and in crustal thickness between the two hemispheres. The hemisphere south of the dichotomy boundary (often called the southern highlands or uplands) is very heavily cratered and ancient, characterized by rugged surfaces that date back to the period of heavy bombardment. In contrast, the lowlands north of the dichotomy boundary have few large craters, are very smooth and flat, and show features indicating extensive resurfacing since the southern highlands formed. Topographic and geophysical gravity data indicate that the crust in the southern highlands has a maximum thickness of about [value], whereas the crust in the northern lowlands peaks at around [value] in thickness. The location of the dichotomy boundary varies in latitude across Mars and depends on which of the three physical expressions of the dichotomy is being considered.

The origin and age of the hemispheric dichotomy are still debated. Hypotheses generally fall into two categories: exogenic theories, which propose the dichotomy was produced by a mega-impact event or several large impacts early in the planet's history; and endogenic theories, which propose the dichotomy resulted from crustal thinning in the northern hemisphere due to mantle convection, overturning, or other chemical and thermal processes in the planet's interior. One endogenic model proposes an early episode of plate tectonics producing a thinner crust in the north, similar to spreading plate boundaries on Earth. Whatever its origin, the Martian dichotomy appears to be extremely old.
A recent theory based on the Southern Polar Giant Impact, supported by the discovery of twelve hemispherical alignments, suggests that exogenic mechanisms are more likely than endogenic ones and that Mars never had plate tectonics capable of modifying the dichotomy.

Laser altimeters and radar-sounding data from orbiting spacecraft have identified a large number of basin-sized structures previously hidden in visual images. Called quasi-circular depressions (QCDs), these features likely represent derelict impact craters from the period of heavy bombardment that are now covered by a veneer of younger deposits. Crater-counting studies of QCDs suggest that the underlying surface in the northern hemisphere is at least as old as the oldest exposed crust in the southern highlands. The ancient age of the dichotomy places a significant constraint on theories of its origin.

Straddling the dichotomy boundary in Mars's western hemisphere is a massive volcano-tectonic province known as the Tharsis region, or the Tharsis bulge. This immense, elevated structure is thousands of kilometers in diameter and covers up to 25% of the planet's surface. Averaging 7–10 km above datum (Martian "sea" level), Tharsis contains the highest elevations on the planet and the largest known volcanoes in the Solar System. Three enormous volcanoes—Ascraeus Mons, Pavonis Mons, and Arsia Mons (collectively the Tharsis Montes)—sit aligned NE–SW along the crest of the bulge. The vast Alba Mons (formerly Alba Patera) occupies the northern part of the region. The huge shield volcano Olympus Mons lies off the main bulge at the western edge of the province. The extreme massiveness of Tharsis has placed tremendous stress on the planet's lithosphere.
Aeroecology studies how airborne life forms use and interact with the biotic and abiotic components of the atmosphere. The aerosphere is treated as habitat, and how organisms respond to and exploit the dynamic aeroscape affects the ecology, evolution, and conservation of many birds, bats, insects, and plants. Interactions and properties of the aerosphere—the zone closest to Earth's surface—create selective pressures that shape organisms' size, morphology, and behavioral, sensory, metabolic, and respiratory functions. Unlike strictly terrestrial or aquatic organisms, aerial organisms are immediately affected by changing conditions such as wind, air density, oxygen concentration, precipitation, temperature, sunlight, polarized light, moonlight, and geomagnetic and gravitational forces.

Traditionally, aeroecology has relied on field methods such as direct observation and detection techniques (e.g., moon-watching, thermal imaging, and bioacoustics). Recently, the field has advanced through remotely sensed data, especially Doppler weather radar (NEXRAD). In March 2012, an international, interdisciplinary Radar Aeroecology Workshop was held at the National Weather Center, University of Oklahoma, Norman, Oklahoma, USA. Experts in ecology and meteorology have discussed how various radar technologies can be applied to aeroecological questions. Aeroecology research groups at the University of Oklahoma and the University of Delaware continue to develop and integrate remotely sensed data to quantify, characterize, and track biological use of the lower aerosphere.

History: Aeroecology is a relatively new field. The concept was introduced by Boston University researcher Thomas Kunz and colleagues in a 2008 paper, "Aeroecology: Probing and Modeling the Aerosphere."

Observational aeroecology: Traditionally, aeroecology relied on ground-based observations of organisms occupying the airspace above, including near-surface foraging behavior and nocturnal passage migrants observed by human observers equipped with optics. The advent and adoption of technologies such as thermographic cameras, marine radar, and NEXRAD for aeroecological studies revolutionized the ability to detect and track sufficiently large animals in the aerosphere.

Radar aeroecology: Studies using weather radar were pioneered by Dr. Sidney A. Gauthreaux during his graduate work at Louisiana State University and later as a professor at Clemson University. His initial work with radar images produced by the WSR-57 network revealed much about the trans-Gulf of Mexico arrivals and departures of Neotropical migratory birds.

Reflectivity: Radar beams reflect off sufficiently dense objects, such as water droplets, airplane fuselages, or flying animals. The reflectance of an object depends on its radar cross-section, which is dictated by the object's size, shape, and material composition. Weather radar reflectivity data represent the summed reflectivity of all objects within the sampled airspace and therefore provide a generalized measure of the amount of rain or, for aeroecological purposes, the abundance of animals in that volume of air. Aeroecologists use the term "bioscatter" to describe radar reflectance from biological objects.

Relative velocity: Weather radars can detect Doppler shifts in returning waveforms. This information is used to estimate a mean relative velocity for all objects within the sampled airspace.
Aeroecologists have used this information to distinguish objects drifting with the wind (particulates, e.g., dust, seeds, or pollen) from objects moving slightly faster than, or at an angle to, the wind (e.g., insects), and from objects moving at least 5–6 m/s faster than, or against, the predominant wind direction (e.g., birds and bats); a minimal sketch of this velocity rule appears at the end of this article.

Dual-pol radar: An upgrade of weather radars to allow dual-polarization of the radar beam promises greater characterization and discrimination of airborne targets. For aeroecology, this should improve the ability to distinguish migrating birds from insects, weather, or suspended particulates. Ratios of horizontal to vertical beam reflectivity and Doppler shift hold promise for gauging discrepancies between bird orientation and their realized movement paths, providing a means to assess drift compensation among migratory birds.

See also: Natural environment; Nature; Ecology.
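To make the velocity rule above concrete, here is a minimal, illustrative Python sketch. It is not taken from any radar-processing package: the function name, the reduction of target and wind motion to a single along-beam velocity component, and the exact thresholds are assumptions chosen only to encode the rule of thumb quoted in this article.

# Illustrative sketch: classify a radar target by its motion relative to the wind.
# Assumptions: velocities are one-dimensional along-beam components in m/s
# (positive = downwind), and 5 m/s approximates the 5-6 m/s airspeed threshold.

WIND_TOLERANCE = 1.0      # m/s; within this of the wind speed = passive drift
POWERED_FLIGHT_MIN = 5.0  # m/s; birds and bats exceed the wind by at least this

def classify_target(target_velocity: float, wind_velocity: float) -> str:
    """Guess a target class from its velocity relative to the ambient wind."""
    airspeed = target_velocity - wind_velocity  # self-powered component
    if abs(airspeed) <= WIND_TOLERANCE:
        return "particulate (dust, seed, or pollen)"  # drifting with the wind
    moving_against_wind = target_velocity * wind_velocity < 0
    if moving_against_wind or abs(airspeed) >= POWERED_FLIGHT_MIN:
        return "bird or bat"  # strong self-powered flight
    return "insect"  # slightly faster than, or angled to, the wind

if __name__ == "__main__":
    wind = 8.0  # m/s, downwind-positive
    for v in (8.3, 10.5, 15.0, -2.0):
        print(f"{v:+5.1f} m/s -> {classify_target(v, wind)}")

In practice the comparison is made against winds-aloft profiles at the sampled altitude, and dual-polarization variables add further discrimination; this sketch encodes only the single rule of thumb described above.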
Hex anxiously waited for the demon to show up that night. He didn't have anything to protect himself—no crosses, nothing. He hadn't believed in that kind of thing, but after getting magical powers all that changed. Demons were real, and he didn't know what else might come through once the door had been opened. It was midnight, so he thought he'd be safe; Turbuk had said it would be fine after midnight. The thing was supposed to appear that night, but the day had passed—so it wouldn't now, right? He went to sleep feeling relieved.

The next thing he knew he was in a factory. He knew it was a dream—he'd had lucid dreams before. He walked through the factory and came upon a circular platform where someone who looked exactly like him was standing.

"Hello?" Hex called.

The doppelganger turned. It wore a mask that looked like a face without skin; red eyes stared through the openings. "I am here to torment you, human," the doppelganger said.

"I'm not exactly human—I'm an Angel of Acid," Hex said proudly.

"Really? I've never heard of a thing like that before." The doppelganger took off his mask. Underneath, he had the same face as Hex.

"What's your name, demon? And why did you take my form?" Hex asked.

"I've been watching you for a while," the doppelganger replied. "You're exactly the kind of faithless man I'm into tormenting. Also, I chose your form because it is the closest to perfection."

"You may call me... Pestilence Machine—Pestilence for short." Pestilence looked at his arms proudly.

"Perfection? A short guy like me? You've got to be kidding." He scoffed.

"Everyone has a different perception of perfection; I like being young-looking." Pestilence smiled.

"Trust me, I'm a long way from perfect." He rubbed the back of his neck.

"Aw, don't sell yourself short, hun. I may be here to torment you, but please don't do my job for me." Pestilence shook his head.

"I could point out a million flaws, and that's just physically." He squinted; he didn't think highly of himself at all. The times he acted narcissistic were only to pretend he had some kind of ego—that's what no self-esteem got him.

"Again, please don't do my job for me. Although it might be easy to reinforce some of your biases against yourself, I was surprised you were helping me instead of trying to kick me out. I'm a demon, you know. See?" Pestilence spread wings made from square hunks of metal with square holes at the ends that dripped blood.

"As long as you aren't here to hurt my friends, I'm fine with it being just me." He shrugged.

"Well, how about I have you all to myself? I could possess you and take your lover out of the picture. Then it could be just you and me." Pestilence put a finger to his bottom lip.

"I won't let you hurt her." He hadn't had any qualms with the demon up until that point.

"I'd like to see you stop me. I won't have you with anyone else, you see." Pestilence laughed. "I'd prefer that you look only at me. Even if you hate me, I enjoy it."

"I'll do whatever it takes to get you out of me," he said. "I was fine with you before, but if you threaten Din, we are no longer on good terms."

Pestilence frowned. "I'd like to see you try." He laughed and dissipated.

He woke up very disturbed. He had to protect Din, but how? When would the demon strike? He wondered whether the demon had been flirting with him and concluded that it had, from the way it spoke. He had to figure out how to protect Din from himself without letting her know that a demon inside him was waiting to get out.
A pale, thread-like arthropod found in a marble cavern called Lange Cave in Sequoia National Park, California, has been identified as a new species of millipede, Illacme tobini. The male specimen—about 20 mm (0.8 inches) long—has roughly 414 legs and four modified limbs (the ninth and tenth pairs) used to transfer sperm. Only this single male has been found, so females are unknown.

The specimen was discovered in October 2006 by cave biologist Jean Krejca (now with Zara Environmental in Texas) during a survey of caves in Sequoia and Kings Canyon National Parks; major surveys ran from 2002 to 2004, with follow-up excursions from 2006 to 2009. Krejca sent the millipede to specialists Paul Marek (Virginia Tech) and William Shear (Hampden-Sydney College) for analysis.

Only one other species of Illacme has ever been discovered: Illacme plenipes, from San Benito County, California, about 150 miles (240 kilometers) from Lange Cave. With up to 750 legs, I. plenipes is the leggiest millipede on the planet. "I never would have expected that a second species of the leggiest animal on the planet would be discovered in a cave 150 miles away," Marek said in a statement. The researchers dubbed the new species Illacme tobini after Ben Tobin, a cave specialist at Grand Canyon National Park who organized the survey that uncovered the new millipede.

After the discovery, researchers spent several years looking for more specimens, checking around Lange Cave and 63 other locations in the Sierra Nevada foothills. They turned over leaf litter, logs, and rocks, but found nothing. As a result, the new species is known only from the single male found in Lange Cave, a marble cavern in woodland habitat at the base of Yucca Mountain.

The eyeless millipede may feed on fungus, the researchers wrote on Oct. 20 in the journal ZooKeys. Its ninth and tenth leg pairs have been converted into gonopods, the millipede version of penises. These specialized limbs are covered in spikes and shovel-like projections that shuttle sperm from male to female. The millipede also sports about 200 pores that secrete an unidentified substance—perhaps a chemical defense against predators. Researchers say it remains unclear whether Illacme tobini lives solely in caves or can also be found in typical millipede hideouts, such as the undersides of rocks.

Illacme plenipes was discovered in 1928, and Illacme tobini is only the second Illacme species to be described, nearly 90 years later. Both species belong to the secretive family Siphonorhinidae, which contains just 12 known species. Illacme species are the only North American representatives; other family members occur in Vietnam, South Africa, India, Indonesia, and Madagascar. The family likely reached these disparate regions when its ancestors spread across the ancient supercontinent Pangea and were separated when it broke apart about 200 million years ago, the researchers wrote.

Alongside many spiders, pseudoscorpions, and flies, cave explorers found a tiny threadlike millipede in unexplored dark marble caves in Sequoia National Park. The specimen was sent to diplopodologists Bill Shear and Paul Marek, who recognized it as an evolutionary cousin of Illacme plenipes. The new species has "only" 414 legs compared with its relative's roughly 750, yet it shares bizarre anatomical features, including a body bearing about 200 poison glands, silk-secreting hairs, and four penises. The study was published in the open-access journal ZooKeys.
The new millipede, named Illacme tobini after cave biologist Ben Tobin of the National Park Service, is described by its discoverer Jean Krejca (Zara Environmental LLC) and millipede taxonomists Paul Marek (an assistant professor in the Department of Entomology at Virginia Tech) and William A. Shear (Hampden-Sydney College). Its closest relative lives under giant sandstone boulders outside San Juan Bautista, California.

In addition to its extreme number of legs, the new millipede has bizarre-looking mouthparts of unknown function; four legs modified into gonopods (male reproductive appendages); a body covered in long, silk-secreting hairs; and paired nozzles on each of its more than 100 segments that eject an unknown defensive chemical. The authors note that exploring and documenting Earth's biodiversity can help prevent anonymous extinction, in which a species disappears before its ecological role, potential benefits to humanity, or intrinsic beauty are known.

Reference: Marek PE, Krejca JK, Shear WA (2016) A new species of Illacme Cook & Loomis, 1928 from Sequoia National Park, California, with a world catalog of the Siphonorhinidae (Diplopoda, Siphonophorida). ZooKeys 626: 1-43. DOI: 10.3897/zookeys.626.9681. Funding came from a National Science Foundation Systematics and Biodiversity Sciences grant to J. Bond, P. Sierwald, W. Shear, P. Marek, and T. Jones (DEB-1256139), and from a Virginia Tech USDA NIFA Hatch Project (VA-160028).

Millipedes, the little blunt-headed invertebrates with a habit of coiling up like baby ferns, rarely live up to their name. In Latin, millipede means "thousand feet," though the majority of these arthropods do not surpass the hundred-leg mark. That is not to say these critters want for feet—they have plenty. A newly described species, which was found living alone in a small California cave, almost does the millipede name justice.
LONDON, March 1 (Reuters) - Mixed factory activity data, a stronger dollar fueled by increased U.S. rate‑hike expectations, and weaker developed markets pushed emerging stocks lower on Thursday, while the South African rand hit a two-week low. Manufacturing data from across emerging markets remained in expansion territory. China’s Caixin survey showed growth picked up to a six-month high in February, but Russian factory activity rose at its slowest pace since July 2016 and Turkey’s eased. Central and Eastern Europe also experienced slower growth, with Hungary, Poland and the Czech Republic all seeing dips in factory activity; Poland’s PMI was at a four-month low. Liam Carson, emerging Europe economist at Capital Economics, said the falls in Central Europe appeared related to a drop in Germany’s index, the region’s key trading partner, which came in at a six-month low. "In both Poland and the Czech Republic, the new export orders components weakened markedly," he noted. MSCI’s benchmark emerging markets stocks index was trading slightly down at around two-week lows as losses in some markets offset gains in others. Chinese mainland stocks gained 0.6 percent on the Caixin data, but Polish shares fell 1.5 percent to seven-month lows and Turkish shares slipped 1 percent. The lacklustre performance also reflected falls in developed markets following Tuesday’s hawkish-sounding comments from new Federal Reserve chair Jerome Powell, which revived fears that the Fed will tighten more quickly than previously expected. The comments boosted the dollar to a six-week high against a basket of currencies as traders priced in at least four rate hikes by the Fed in 2018. This weighed on emerging currencies, with South Africa’s rand the biggest faller, down 1 percent to two-week lows. The rand has weakened for three successive sessions, but Simon Quijano-Evans, an emerging markets strategist at Legal & General Investment Management, said this should be viewed in the context of strong performance since the start of December. “The rand has rallied so much versus peers that there was bound to be some sort of correction,” he said. “Much depends on the dollar itself now, but the rand classically has a weaker defence mechanism in the form of foreign exchange reserves.” Expectations of major structural reforms under new President Cyril Ramaphosa to kickstart South Africa’s sluggish economy boosted the rand to three-year highs in February. South Africa’s PMI rose to its best reading in nearly a year in February on increased company optimism. The appointment of former finance minister Pravin Gordhan as public enterprises minister has also raised hopes of a clean-up at state-owned firms. Following a downgrade to its credit rating by S&P Global on Wednesday, South African state utility Eskom said it was working on a turnaround strategy and had signed a $2-billion short-term loan with a consortium of seven local and international banks. But Eskom’s dollar-denominated eurobonds continued to fall across the curve. Turkey’s lira fell 0.4 percent, Russia’s rouble 0.6 percent and China’s yuan 0.3 percent. The Polish zloty underperformed its emerging Europe peers, weakening 0.3 percent against the euro. The European Parliament is due to vote today on whether to support a proposal by the European Union’s executive to punish the Polish government for backsliding on democracy. Ukraine’s state-owned energy firm Naftogaz said it would claim damages from Russia’s Gazprom for its failure to deliver supplies of prepaid gas to Ukraine. 
A Stockholm arbitration tribunal found in Naftogaz’s favour on Wednesday in a long-running legal battle.

For GRAPHIC on emerging market FX performance 2018, see tmsnrt.rs/2e7eoml. For GRAPHIC on MSCI emerging index performance 2018, see tmsnrt.rs/2dZbdP5.

Emerging Markets Prices from Reuters — Equities
Morgan Stanley Emrg Mkt Indx: 1194.32, Net -0.87, % Chg -0.07, YTD +3.10
Czech Rep: 1118.48, Net -1.99, % Chg -0.18, YTD +3.74
Poland: 2326.93, Net -37.39, % Chg -1.58, YTD -5.46
Hungary: 38266.88, Net +154.88, % Chg +0.41, YTD -2.82
Romania: 8423.82, Net -28.59, % Chg -0.34, YTD +8.64
Greece: 828.65, Net -7.01, % Chg -0.84, YTD +3.28
Russia: 1274.32, Net -11.15, % Chg -0.87, YTD +10.39
South Africa: 51284.43, Net -99.02, % Chg -0.19, YTD -2.38
Turkey: 17872.70, Net -1078.06, % Chg -0.91, YTD +2.20
China: 3273.76, Net +14.35, % Chg +0.44, YTD -1.01
India: 34038.88, Net -145.16, % Chg -0.42, YTD -0.05
DENVER (CBS4) — Human enterovirus 68 is responsible for dozens of children across Colorado ending up in intensive care units. It mimics the common cold, but within hours those affected can become severely ill. Parents say that within hours of developing cold symptoms, their children were gasping for air and placed on ventilators. Thirteen-year-old Will Cornejo said, "I remember thinking I was going to die. Yesterday I felt like I couldn't breathe at all." He remains on oxygen in intensive care, but his parents say he is improving. "He was white as a ghost, his lips were blue, he was completely unconscious at that point," said his mother, Jennifer Cornejo. "Sheer terror," said his father, Matt Cornejo. They called 911 and he had to be airlifted to the hospital. "At that point we weren't sure he was going to make it," Jennifer said. At Rocky Mountain Hospital for Children, Dr. Raju Meyappan said the virus had not been seen in Denver previously. He is concerned by how quickly it can become life-threatening, especially in children with mild asthma. "The onset of symptoms was very rapid, usually within hours," Meyappan said. Will is still weak, but his mother can tell he is getting better. DENVER — A rare virus has sickened dozens of Colorado children, filling intensive care units around the state, CBS Denver reports. Human enterovirus 68 has the same symptoms as the common cold, but within hours those affected can become severely ill; parents say children who initially seem to have a cold can be left gasping for air and placed on ventilators. Since EV‑D68 is a virus, antibiotics don’t work; doctors can only treat symptoms, helping kids breathe and keeping airways open. "I remember thinking I was going to die," said 13-year-old Will Cornejo. "Yesterday I felt like I couldn't breathe at all." Will is still on oxygen and in intensive care, but his parents say he's improving. "He was white as a ghost, his lips were blue, he was completely unconscious at that point," said his mother, Jennifer Cornejo. His father, Matt Cornejo, said they felt "sheer terror." "It was the scariest moment of my life because he was surrounded by five paramedics and a police officer," he said. "It's kind of shocking to be a few minutes away from possibly dying to being a normal eighth grader. We're very relieved." Will had to be airlifted to the hospital. "At that point we weren't sure he was going to make it," said Jennifer. In the ICU at Rocky Mountain Hospital for Children, Dr. Raju Meyappan said the virus hadn't shown up in Denver until now and he is seeing how quickly it becomes life-threatening, especially in children with mild asthma. "The onset of symptoms was very rapid, usually within hours," Meyappan said. Since human enterovirus 68 is a virus, antibiotics don't work; doctors can only treat the symptoms, helping kids breathe and trying to keep their airways open. Will is still weak but his mother can tell he is getting better. "He's texting, he's doing Instagram," said Jennifer Cornejo. "A lot of people were wondering if I died," said Will. "And I wanted to make sure that they didn't think that, so I started texting people telling them I wasn't." "It's kind of shocking to be a few minutes away from possibly dying to being a normal eighth-grader," said his father. "We're very relieved." Will Cornejo is recovering at Rocky Mountain Hospital for Children at Presbyterian/St. Luke's Medical Center; his parents, Jennifer and Matt of Lone Tree, were with him. 
Children's hospitals in Denver are experiencing an alarming spike in a severe respiratory illness—especially among very young children and those with asthma—that may be caused by an uncommon viral pathogen. Officials at Children's Hospital Colorado said they have treated more than 900 children since Aug. 18 for severe respiratory illness and admitted 86 to the hospital. "We've been seeing a very high volume in our ER, ICU and among hospitalized patients. The hospital is very, very full," said Dr. Christine Nyquist, a pediatric infectious disease physician. "Kids are getting the virus and having asthma complications." The suspected agent is human enterovirus 68, a rare virus associated with respiratory illness and related to rhinovirus, which causes the common cold, according to the Centers for Disease Control and Prevention. Samples sent to confirm whether it is enterovirus 68 have not yet produced a definitive answer, and similar outbreaks are being investigated in other cities, including confirmed cases in Kansas City, Mo. At Rocky Mountain Hospital for Children at Presbyterian/St. Luke's Medical Center in Denver, 13-year-old William Cornejo, a moderate asthmatic, was one of five children physicians put on ventilators this week after his mild cold developed overnight into a life-threatening illness. "Tuesday evening he had a little cold," said his mother, Jennifer. "He's a pretty moderate asthmatic." On Friday he was recovering at the hospital. "He'd never gone to the hospital for it since he was 2. I just never dreamed that we needed to go to the hospital," his mother said. After three treatments inhaling albuterol (the medicine used to prevent and treat wheezing, shortness of breath, coughing, and chest tightness), William wasn't any better, his mother said. She was trying to reach his doctor when she noticed he was unresponsive. "His lips were blue. He was white as a ghost," Cornejo said.
"From what I can remember, Professor Frances specifically told us to make her a new body," Alina muttered behind Alex as she stared at the professor's head. "But from the way she talked about her plan, it seemed like she needed a special body created from special materials. If you're this confident, does that mean you already have a plan to obtain the materials for her new body?" "Yes. I was able to create my plan after you shared all of that Symbolist's memory with me," Alex replied, blinking. "Mind you, I learned a lot from that guy's memories." Alex heard some groans in the background, probably from his pun, but he ignored them as he continued to converse with Alina. "From the way you're smiling, you seem to have enjoyed peeking at that Symbolist's memories," Alina murmured, glaring at Alex. "Are you sure you're only thinking about the plan to recreate Professor Frances's body?" "Well, there's a lot of interesting stuff in that Symbolist's memories. I can't just ignore them, you know." Just like Alex said, he had obtained a lot of interesting information from the memories Alina had scoured from the Symbolist. If information could be compared to food, the information Alex obtained was like a succulent, honey-glazed roast chicken. Not only did he learn more about the Symbolists, he also observed the way of life of each of the Three Factions. For the Symbolists, what mattered most was knowledge and academic standing. The higher a Symbolist's academic status, the higher their rank among peers; positions of power were granted only to those with exemplary credentials. Those who lacked knowledge were regarded as plebeians or barbarians. At first, Alex found the Symbolists' view on knowledge extreme. But when he tried his Pillaged Symbolist ability, he discovered why they were so rigid. Their power—creating Symbols to manipulate the laws of physical reality—was a science in its own right. To discover new applications, Symbolists had to identify a problem, formulate a hypothesis, and conduct experiments to confirm it. Through this scientific process a Symbolist could grow stronger, and they adhered to it for years. It was no wonder they were obsessed with knowledge and academic achievement; only Symbolists with that mindset could devise innovations with their power. Like other power systems, the Symbolists had several levels of strength, with clear distinctions at each rank. The weakest was the Mortal Type Symbolist, followed by the King Type and the Emperor Type. An Emperor Type had to undergo a tribulation to advance to the Sovereign Type. Above Sovereign came the Earth Type, then the Heaven Type, followed by the Ancient Type, the Immortal Type, the Eternal Type, and finally the Paragon Type. The Grand Symbolist, the strongest in the world, was said to be a Paragon Type. When Alex saw this hierarchy, he sighed, feeling sorry for those so engrossed in study—their love lives were likely dismal. If the Symbolists valued knowledge, the Celestials valued bloodlines. Alex learned that a Celestial's power depended on their lineage. The Celestials' ability to harness Stellar Essence depends on the amount of Celestial Bloodline in their veins. The purer and stronger the bloodline, the greater the power. This natural limitation produced a society divided into three strata. The First Stratum contains those with the thinnest traces of Celestial Bloodline. 
They can barely use Stellar Essence but meet the minimum to be recognized as Celestials; they are called Mortal Celestials because their power is comparable to that of mortals. The Second Stratum includes Celestials with enough bloodline to make them formidable fighters, though not supremely powerful. These average-powered individuals are known as Heavenly Celestials, their strength bringing them closer to the heavens. The Third and highest Stratum comprises those with the thickest, purest Celestial Bloodline. Their lineage grants them legendary abilities, and they are treated as the ruling class among Celestials. These pure-blooded Celestials are called the Stellar Celestials, their presence compared to the stars themselves. As for their power levels, the Celestials surprisingly followed the same system as the Symbolists: Mortal, King, Emperor, Sovereign, Earth, Heaven, Ancient, Immortal, Eternal, and Paragon types. Because the memories they recovered came from a Symbolist, Alex could not determine how powerful the Celestials were as a whole. But since the Celestials could stand toe-to-toe with the Symbolists, Alex could safely conclude that the Celestial faction was likely just as strong as the Symbolist faction.
"From what I can remember, Professor Frances specifically told us to make her a new body," Alina muttered behind Alex as she stared at the professor's head. "But from the way she talked about her plan, it seemed like she needed a special body created from special materials. If you're this confident, does that mean you already have a plan to obtain the materials for her new body?" "Yes. I was able to create my plan after you shared all of that Symbolist's memory with me," Alex replied, blinking. "Mind you, I learned a lot from that guy's memories." Alex heard some groans in the background, probably from his pun, but he ignored them as he continued to converse with Alina. "From the way you're smiling, you seem to have enjoyed peeking at that Symbolist's memories," Alina murmured, glaring at Alex. "Are you sure you're only thinking about the plan to recreate Professor Frances's body?" "Well, there's a lot of interesting stuff in that Symbolist's memories. I can't just ignore them, you know." Just like Alex said, he had obtained a lot of interesting information from the memories Alina had scoured from the Symbolist. If information could be compared to food, the information Alex obtained was like a succulent, honey-glazed roast chicken. Not only did he learn more about the Symbolists, he also observed the way of life of each of the Three Factions. For the Symbolists, what mattered most was knowledge and academic standing. The higher a Symbolist's academic status, the higher their rank among peers; positions of power were granted only to those with exemplary credentials. Those who lacked knowledge were regarded as plebeians or barbarians. At first, Alex found the Symbolists' view on knowledge extreme. But when he tried his Pillaged Symbolist ability, he discovered why they were so rigid. Their power—creating Symbols to manipulate the laws of physical reality—was a science in its own right. To discover new applications, Symbolists had to identify a problem, formulate a hypothesis, and conduct experiments to confirm it. Through this scientific process a Symbolist could grow stronger, and they adhered to it for years. It was no wonder they were obsessed with knowledge and academic achievement; only Symbolists with that mindset could devise innovations with their power. Like other power systems, the Symbolists had several levels of strength, with clear distinctions at each rank. The weakest was the Mortal Type Symbolist, followed by the King Type and the Emperor Type. An Emperor Type had to undergo a tribulation to advance to the Sovereign Type. Above Sovereign came the Earth Type, then the Heaven Type, followed by the Ancient Type, the Immortal Type, the Eternal Type, and finally the Paragon Type. The Grand Symbolist, the strongest in the world, was said to be a Paragon Type. When Alex saw this hierarchy, he sighed, feeling sorry for those so engrossed in study—their love lives were likely dismal. If the Symbolists valued knowledge, the Celestials valued bloodlines. Alex learned that a Celestial's power depended on their lineage. The Celestials' ability to harness Stellar Essence depends on the amount of Celestial Bloodline in their veins. The purer and stronger the bloodline, the greater the power. This natural limitation produced a society divided into three strata. The First Stratum contains those with the thinnest traces of Celestial Bloodline. 
They can barely use Stellar Essence but meet the minimum to be recognized as Celestials; they are called Mortal Celestials because their power is comparable to that of mortals. The Second Stratum includes Celestials with enough bloodline to make them formidable fighters, though not supremely powerful. These average-powered individuals are known as Heavenly Celestials, their strength bringing them closer to the heavens. The Third and highest Stratum comprises those with the thickest, purest Celestial Bloodline. Their lineage grants them legendary abilities, and they are treated as the ruling class among Celestials. These pure-blooded Celestials were called the Stellar Celestials, their presence compared to the stars themselves. As for their power levels, the Celestials surprisingly followed the same system as the Symbolists: Mortal, King, Emperor, Sovereign, Earth, Heaven, Ancient, Immortal, Eternal, and Paragon types. Because the memories they recovered came from a Symbolist, Alex could not determine how powerful the Celestials were as a whole. But since the Celestials could stand toe-to-toe with the Symbolists, Alex could safely conclude that the Celestial faction was likely just as strong as the Symbolist faction.
long_en_285
wiki_en
628
en
The Department of Computer Science, University of Delhi, is part of the Faculty of Mathematical Sciences and was established in 1981. It began the three-year Master of Computer Applications (MCA) program in 1982, one of the first such programs in India, and started the M.Sc. in Computer Science in 2004. The department also pursues research in computer science and offers a Doctor of Philosophy (Ph.D.) program. The university conducts a postgraduate Diploma in Computer Applications (PGDCA) through its constituent colleges. Emphasis is placed not only on theoretical concepts but also on practical experience and industry interaction.

MCA students, apart from classroom teaching, undertake case studies, presentations, and small projects. Examples of projects and assignments include:
- Implementation of a Unix shell
- Implementation of a chat server
- Simulation of machine language code and implementation of an assembler
- Simulation of the basic file system on Linux
- Simulation of sliding window protocols (Go-Back-N and Selective Repeat)
- Simulation of a two-pass assembler
- Projects designed, documented, and coded using the software development life cycle (SDLC), e.g., a share tracker system
- Computerized health-care system
- Websites on tourism, online FIR, online bookstore, online examination, social networking, and online shipping management
- Digital library system
- Research and implementation of cryptographic algorithms
- Design and implementation of a new approach for searching in encrypted data using Bloom filters
- Analysis and implementation of security algorithms in cloud computing
- Malware and keylogger design
- Software and hardware implementation of a smart home system
- Detection and prevention of misuse and advanced spamming techniques
- Design and security analysis of chaotic encryption
- Analysis of risks, techniques, and corporate use of Web 2.0 technologies
- Implementation of homomorphic encryption algorithms
- Regional language encryption and translation
- Implementation of elliptic curve cryptography
- Design and implementation of self-synchronizing stream ciphers

M.Sc. Computer Science: As part of the curriculum, students give presentations, work on group projects, and complete programming assignments. The following are some of the projects and assignments undertaken by students:
- Implementation of robot task assignment given available resources using MATLAB
- JADE programming for agent communication
- Implementation of the DES encryption and decryption algorithm
- Application of a genetic algorithm to the 8-queens problem
- Implementation of K-means, FP-Tree, BIRCH, and DBSCAN algorithms using C++
- Generating all strong association rules from a set of given frequent itemsets of transactions
- Implementation of a DBMS
- Data preprocessing and KDD (Knowledge Discovery and Data Mining) using WEKA and C4.5
- Implementation of clustering techniques on the output of the fuzzy C-means algorithm as initial input, using MATLAB
- Simulation of a lexical analyzer and parser using C
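To make the flavor of these assignments concrete, here is a minimal, self-contained sketch of one item from the list above, the genetic-algorithm approach to the 8-queens problem. It is illustrative rather than the department's actual assignment code; the one-queen-per-column encoding, the population size, and the mutation rate are all assumptions of this sketch.

```python
# A minimal sketch (not the department's code): a genetic algorithm for the
# 8-queens problem. Encoding and parameters are illustrative assumptions.
import random

N = 8  # board size; a candidate is a tuple giving each column's queen row


def conflicts(board):
    """Count attacking pairs: same row, or same diagonal."""
    return sum(
        1
        for i in range(N)
        for j in range(i + 1, N)
        if board[i] == board[j] or abs(board[i] - board[j]) == j - i
    )


def crossover(a, b):
    """Single-point crossover of two parent boards."""
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]


def mutate(board, rate=0.1):
    """With probability `rate`, move one random queen to a random row."""
    board = list(board)
    if random.random() < rate:
        board[random.randrange(N)] = random.randrange(N)
    return tuple(board)


def solve(pop_size=100, generations=10_000):
    pop = [tuple(random.randrange(N) for _ in range(N)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=conflicts)          # fittest (fewest conflicts) first
        if conflicts(pop[0]) == 0:
            return pop[0]                # found a conflict-free placement
        parents = pop[: pop_size // 2]   # truncation selection
        pop = parents + [
            mutate(crossover(*random.sample(parents, 2)))
            for _ in range(pop_size - len(parents))
        ]
    return pop[0]                        # best found if no perfect solution


print(solve())  # e.g. (4, 6, 0, 2, 7, 5, 3, 1): zero attacking pairs
```

The same skeleton (fitness function, selection, crossover, mutation) carries over to the other search-style projects on the list; only the encoding and the fitness function change.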
Infrastructure
The students of the department have access to two libraries. The Departmental Library is a reference library with over 4,000 titles in Computer Science, IT, and related areas such as Electronics and Mathematics. The Central Science Library, established in 1981, is one of the largest science libraries in India and holds 220,000 volumes of books and periodicals. Its website provides electronic subscriptions to 27,000 e-journals, including IEEE, ACM, and Springer journals and proceedings.

Internet connection
All labs, offices, and faculty rooms in the department are connected to the internet through the university intranet. Internet connectivity is provided via four switches, and a 24-port switch is used in the LAN to supply internet access to systems in the laboratories, classrooms, seminar room, and committee room.

See also: Delhi University Computer Centre

Notable alumni:
- Kiran Sethi — VP, Deutsche Bank, USA
- Pradeep Mathur — VP, Capgemini, UK
- Gulshan Kumar — Director, Alcatel‑Lucent, India
- Ranjan Dhar — Director, Silicon Graphics, India
- Manish Madan — VP, Perot Systems (TSI), India
- Sachin Wadhwa — Head of Operations, Mastech InfoTrellis Inc., USA
- Kumaran Sasikanthan — Country Head, AllSight Software, India

References / External links:
- Official website
- Admissions Information
This one was taken about a month and a half ago on our camping trip. And now... I just took this one (don't mind all the junk in the background!). I know that she looks a lot cuter when she's sporting the scruffy look, but that is much harder to maintain since she doesn't care to be brushed. So for us it is much easier to clip her. She is a pretty good sport about it.

And here are some of the comments about my previous post... Jennakat is also known as Darly (she doesn't like the name Darly, but since that's what you all know her by, that's what I'll use). And yes, she's right. She did help me catch the dog before her bath.

OK, Chick said... I don't have animals but I think giving them a bath would be my least favorite part. It seems like lots of hard work. I've always had animals and have given them baths. Dogs are the easiest if you have the right equipment. We have a utility sink with a hose attachment, so I'm not hurting my back by having her in the bathtub, and the hose just makes everything nice... I can focus the water right where I need it (much better than the old cup method).

Bone said... Just put a bunch of toys in the bath. Oh, and tell her that the tub is the only place the monster who lives under the house and peeks through the window at night won't get her. That always worked on me. Well, Lilly seems too nervous about her bath to even bother with toys. I've never tried toys in the bath for any of my pets. But seeing as they seem in panic mode, I don't see toys helping.

After putting it off until we couldn't stand her anymore, it was time to finally break down and groom the dog. She was getting to be quite a mess. Her hair was getting too long (although we keep it trimmed out of her eyes), and she was looking quite shaggy. She hates to be brushed, so she had some mats too. First I took her into the garage with my trusty scissors to trim her up. In the past I've used clippers on her, but her fur is really fine and it clogs up the clippers so we can't use the length guards. Without a guard on the clippers we end up chopping her at different lengths. Besides, my clippers are really loud and they scare her. This time I decided to use just the scissors, and I was quite pleased with how she came out. Yep, she's still cut at different lengths—I'm not perfect. But we were able to achieve the same results we normally get with less stress. I let her up to shake off the hair during the clipping. Then it was time for her bath, and in fact, for the first time she tried to hide from me and even tried to get out our front window. She was a very good girl in the bath and is now all clean and smells like doggy shampoo instead of just like doggy.

At what age are you supposed to start taking care of your parents? I thought I would at least get to wait until I was done with my kid. I've already told you about how my mom got very sick last year and almost died from pneumonia, right? As a refresher: my mom smokes and has osteoporosis and arthritis. Last year she got sick and went to her doctor, who couldn't figure out what was wrong. She wasn't showing any signs of her body fighting the infection. She ended up in the ER because she couldn't breathe, and the doctor there almost sent her home. Luckily, a lung specialist making rounds came in at my stepdad's request and saved my mom's life; she was admitted to the ICU. It turned out to be viral pneumonia and COPD. She now has only about 40% lung capacity and has to carry oxygen with her wherever she goes, although she doesn't have to wear the tube unless she feels short of breath.
In mathematics, a topological space is countably compact if every countable open cover has a finite subcover.

Equivalent definitions
A topological space X is countably compact if it satisfies any of the following equivalent conditions:
(1) Every countable open cover of X has a finite subcover.
(2) Every infinite subset A of X has an ω-accumulation point in X (a point x such that every neighborhood of x contains infinitely many points of A).
(3) Every sequence in X has an accumulation point in X.
(4) Every countable family of closed subsets of X with empty intersection has a finite subfamily with empty intersection.

(1) ⇒ (2): Suppose (1) holds and let A be an infinite subset of X with no ω-accumulation point. By passing to a subset we may assume A is countable. For each x in X choose an open neighborhood U_x with U_x ∩ A finite (possibly empty), which exists because x is not an ω-accumulation point. For each finite subset F of A, let V_F be the union of all the neighborhoods U_x with U_x ∩ A = F. Each V_F is open, being a union of open sets, and every x in X lies in V_F for F = U_x ∩ A, so the family {V_F}, indexed by the finite subsets F of A, covers X. Since a countable set has only countably many finite subsets, {V_F} is a countable open cover of X. But each V_F meets A in at most the finite set F, so no finite subfamily of the V_F can cover the infinite set A (and hence cannot cover X), contradicting (1). Therefore (2) holds.

(2) ⇒ (3): Suppose (2) holds and let (x_n) be a sequence in X. If some value x occurs infinitely many times in the sequence, then x is an accumulation point of the sequence. Otherwise the set of distinct values of the sequence is infinite, so by (2) it has an ω-accumulation point x; every neighborhood of x then contains infinitely many terms of the sequence, so x is an accumulation point of the sequence.

(3) ⇒ (1): Suppose (3) holds and {U_n} is a countable open cover without a finite subcover. Then for each n choose a point x_n not in U_1 ∪ … ∪ U_n. The sequence (x_n) has an accumulation point x, and since the U_n cover X, x belongs to some U_k. But U_k is a neighborhood of x that contains no x_n with n ≥ k, so x is not an accumulation point of the sequence after all. This contradiction proves (1).

(4) ⇔ (1): Conditions (1) and (4) are easily seen to be equivalent by taking complements.

Examples
The first uncountable ordinal (with the order topology) is an example of a countably compact space that is not compact.

Properties
Every compact space is countably compact. A countably compact space is compact if and only if it is Lindelöf. Every countably compact space is limit point compact. For T1 spaces, countable compactness and limit point compactness are equivalent. Every sequentially compact space is countably compact; the converse does not hold. For example, the product of continuum-many closed intervals with the product topology is compact and hence countably compact, but it is not sequentially compact. For first-countable spaces, countable compactness and sequential compactness are equivalent. For metrizable spaces, countable compactness, sequential compactness, limit point compactness, and compactness are all equivalent. The example of the set of all real numbers with the standard topology shows that neither local compactness, σ-compactness, nor paracompactness implies countable compactness. Closed subspaces of a countably compact space are countably compact, and the continuous image of a countably compact space is countably compact. Every countably compact space is pseudocompact. In a countably compact space, every locally finite family of nonempty subsets is finite. Every countably compact paracompact space is compact.
Every countably compact Hausdorff first-countable space is regular. Every normal countably compact space is collectionwise normal. The product of a compact space and a countably compact space is countably compact, but the product of two countably compact spaces need not be countably compact.

See also: sequentially compact space; compact space; limit point compact; Lindelöf space; compactness (mathematics).
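For reference, the four equivalent conditions above admit a compact formal statement. The following LaTeX sketch is just one way to typeset it; the theorem environment comes from amsthm, and the surrounding document scaffolding is there only so the snippet compiles on its own.

```latex
\documentclass{article}
\usepackage{amsmath,amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}[Countable compactness]
For a topological space $X$, the following are equivalent:
\begin{enumerate}
  \item every countable open cover of $X$ has a finite subcover;
  \item every infinite subset $A \subseteq X$ has an
        $\omega$-accumulation point in $X$;
  \item every sequence in $X$ has an accumulation point in $X$;
  \item every countable family of closed subsets of $X$ with empty
        intersection has a finite subfamily with empty intersection.
\end{enumerate}
\end{theorem}
\end{document}
```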
In mathematics, a topological space is countably compact if every countable open cover has a finite subcover.

Equivalent definitions: A topological space X is countably compact if it satisfies any of the following equivalent conditions:
(1) Every countable open cover of X has a finite subcover.
(2) Every infinite subset A of X has an ω-accumulation point in X (a point x such that every neighborhood of x contains infinitely many points of A).
(3) Every sequence in X has an accumulation point in X.
(4) Every countable family of closed subsets of X with empty intersection has a finite subfamily with empty intersection.

(1) implies (2): Suppose (1) holds and let A be an infinite subset of X with no ω-accumulation point. By passing to a subset we may assume A is countable. For each x in X choose an open neighborhood U_x with U_x ∩ A finite (possibly empty); such a neighborhood exists because x is not an ω-accumulation point. For each finite subset F of A let V_F be the union of those U_x with U_x ∩ A = F. Each V_F is open as a union of open sets, the family {V_F} covers X (each x lies in V_F for F = U_x ∩ A), and there are only countably many finite subsets F of the countable set A, so {V_F} is a countable open cover of X. But each V_F meets A in exactly the finite set F, so no finite subfamily of the V_F can cover the infinite set A (and hence cannot cover X), contradicting (1). Therefore (2) holds.

(2) implies (3): Suppose (2) holds and let (x_n) be a sequence in X. If some value x occurs infinitely often in the sequence, then x is an accumulation point of the sequence. Otherwise the set of distinct values of the sequence is infinite, so by (2) it has an ω-accumulation point x, and it is easily checked that x is then an accumulation point of the sequence.

(3) implies (1): Suppose (3) holds and that {U_n} is a countable open cover of X without a finite subcover. For each n choose a point x_n not in U_1 ∪ … ∪ U_n. By (3) the sequence (x_n) has an accumulation point x, and x belongs to some U_k. But U_k is a neighborhood of x that contains no x_n with n ≥ k, so x is not an accumulation point of the sequence after all. This contradiction proves (1).

(4) if and only if (1): Conditions (1) and (4) are equivalent, as is seen by taking complements of the open sets in a cover.

Examples: The first uncountable ordinal (with the order topology) is a countably compact space that is not compact.

Properties: Every compact space is countably compact. A countably compact space is compact if and only if it is Lindelöf. Every countably compact space is limit point compact. For T1 spaces, countable compactness and limit point compactness are equivalent. Every sequentially compact space is countably compact; the converse does not hold. For example, the product of continuum-many closed intervals with the product topology is compact, and hence countably compact, but it is not sequentially compact. For first-countable spaces, countable compactness and sequential compactness are equivalent. For metrizable spaces, countable compactness, sequential compactness, limit point compactness, and compactness are all equivalent. The example of the set of all real numbers with the standard topology shows that neither local compactness, σ-compactness, nor paracompactness implies countable compactness. Closed subspaces of a countably compact space are countably compact, and the continuous image of a countably compact space is countably compact. Every countably compact space is pseudocompact. In a countably compact space, every locally finite family of nonempty subsets is finite. Every countably compact paracompact space is compact. Every countably compact Hausdorff first-countable space is regular. Every normal countably compact space is collectionwise normal. The product of a compact space and a countably compact space is countably compact, but the product of two countably compact spaces need not be countably compact.

See also: sequentially compact space; compact space; limit point compact; Lindelöf space; compactness (mathematics).
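For reference, the four equivalent conditions can also be set in symbols. This is only a restatement of the definitions above; the notation (U_n, A, x_n, C_n, F) is chosen here for the rendering and does not come from the source text:

\begin{enumerate}
  \item Every countable open cover $\{U_n\}_{n\in\mathbb{N}}$ of $X$ admits a finite subcover.
  \item Every infinite $A \subseteq X$ has an $\omega$-accumulation point: there is $x \in X$ with $|N \cap A| = \infty$ for every neighborhood $N$ of $x$.
  \item Every sequence $(x_n)$ in $X$ has an accumulation point in $X$.
  \item If the sets $C_n$ are closed and $\bigcap_{n} C_n = \varnothing$, then $\bigcap_{n \in F} C_n = \varnothing$ for some finite $F \subseteq \mathbb{N}$.
\end{enumerate}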
IBM Quantum Platform (previously known as IBM Quantum Experience) is an online platform that allows public and premium access to cloud-based quantum computing services provided by IBM. It includes access to IBM’s prototype quantum processors, tutorials on quantum computation, and an interactive textbook. As of February 2021, there are over 20 devices on the service, six of which are freely available to the public. The service can be used to run algorithms and experiments and to explore tutorials and simulations demonstrating possibilities with quantum computing. IBM’s quantum processors use superconducting transmon qubits housed in dilution refrigerators at IBM Research’s Thomas J. Watson Research Center. Users interact with a quantum processor through the quantum circuit model of computation. Circuits can be created either graphically with the Quantum Composer or programmatically within the Jupyter notebooks of the Quantum Lab. Circuits are developed using Qiskit and can be compiled to OpenQASM for execution on real quantum systems. The service was launched in May 2016 as the IBM Quantum Experience with a five-qubit processor and a matching simulator connected in a star-shaped pattern. At that time, users could only interact with the hardware through the Quantum Composer GUI, and quantum circuits were limited to the specific two-qubit gates available on the hardware. In July 2016, IBM launched the IBM Quantum Experience community forum, which was subsequently replaced by a Slack workspace. In January 2017, IBM expanded the IBM Quantum Experience by increasing the set of two-qubit interactions available on the five-qubit processor, extending the simulator to support custom topologies up to twenty qubits, and allowing users to interact with the device and simulator using quantum assembly (QASM) code. In March 2017, IBM released Qiskit to enable users to write code and run experiments on the processor and simulator more easily; a beginner's user guide was also added. In May 2017, IBM made a 16-qubit processor available on the IBM Quantum service. In January 2018, IBM launched a quantum awards program hosted on the IBM Quantum Experience. In May 2019, the service was overhauled, adding web-hosted Jupyter notebooks and integrating the online interactive Qiskit Textbook. After a redesign in March 2021, IBM made a greater distinction between the Composer GUI and the Jupyter notebooks, retiring the IBM Quantum Experience name in favor of the separate names IBM Quantum Composer and IBM Quantum Lab; the offering is now collectively called the IBM Quantum Platform. The IBM Quantum Composer is a graphical user interface (GUI) that allows users to construct quantum algorithms and run experiments. Users can run their quantum algorithms on a real quantum processor or use a simulator to see the results. Algorithms developed in the Quantum Composer are referred to as a "quantum score", because the composer resembles a musical score. The composer can also be used in scripting mode, where the user writes programs in the OpenQASM language. Below is an example of a very small program built for IBM's 5-qubit computer. The program prepares a 3-qubit GHZ state — a three-qubit analogue of the Bell state — and then measures the qubits. The measurement collapses the GHZ state to one of two outcomes: |000> or |111>. 
OPENQASM 2.0;
include "qelib1.inc";

qreg q[5];              // allocate 5 qubits (initialized to |00000>)
creg c[5];              // allocate 5 classical bits

h q[0];                 // Hadamard on qubit 0
cx q[0], q[1];          // CNOT with control q0 and target q1; creates the Bell state
                        // (|00> + |11>)/sqrt(2) on qubits 0 and 1
cx q[1], q[2];          // extend the entanglement to qubit 2, producing a 3-qubit GHZ state

measure q[0] -> c[0];   // measuring q0 collapses the entire 3-qubit state
measure q[1] -> c[1];   // q1 and q2 will read the same value as q0
measure q[2] -> c[2];

Every instruction in the OpenQASM language is either the application of a quantum gate, the initialization of the chip's registers to zero, or the measurement of those registers. In 2018, IBM reported that there were over 80,000 users of the IBM Quantum Experience, who collectively ran more than 3 million experiments. Many academic papers have been published by researchers who used the service, and university professors have incorporated IBM Quantum examples and experiments into their curricula. Dr. Christine Corbett Moran, a postdoctoral fellow at the California Institute of Technology, used the service while conducting research in Antarctica; Tara Tosic, a physics student at the École Polytechnique Fédérale de Lausanne (EPFL), used it while researching in the Arctic. People have also used IBM Quantum for various non-academic purposes, including the development of games such as "quantum battleships."

External links: IBM Quantum Platform; IBM Quantum Experience; quantum computing; quantum programming.
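As a supplement to the OpenQASM listing above, the same GHZ circuit can be built programmatically. The following Python sketch uses Qiskit's stable QuantumCircuit interface; submitting the job to a real backend is omitted because backend names and the submission API vary between Qiskit releases, and the export call mentioned in the closing comment is likewise version-dependent:

# Minimal Qiskit sketch of the 3-qubit GHZ circuit from the listing above.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)          # 3 qubits, 3 classical bits
qc.h(0)                            # Hadamard on qubit 0
qc.cx(0, 1)                        # Bell state between qubits 0 and 1
qc.cx(1, 2)                        # extend entanglement: 3-qubit GHZ state
qc.measure([0, 1, 2], [0, 1, 2])   # collapses to |000> or |111>

print(qc.draw())                   # ASCII circuit diagram, Composer-style
# In recent Qiskit releases, qiskit.qasm2.dumps(qc) exports this circuit
# back to OpenQASM 2, matching the hand-written listing above.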
In physics, a quantum amplifier uses quantum-mechanical processes to amplify a signal; examples include the active elements of lasers and optical amplifiers. Key properties are the amplification coefficient and the uncertainty (noise), which are not independent: higher gain entails higher noise. For lasers, this uncertainty corresponds to amplified spontaneous emission from the active medium. The unavoidable noise of quantum amplifiers, which follows from quantum mechanics, is one reason digital signals are used in optical communications. An amplifier increases the amplitude of whatever passes through it. Classical amplifiers handle classical signals, while quantum amplifiers handle quantum signals such as coherent states. The output is not necessarily a coherent state; its form typically depends on the amplifier's design. In addition to amplifying intensity, quantum amplifiers can increase the quantum noise present in the signal. The physical electric field in a paraxial single-mode pulse can be approximated as a superposition of modes. The electric field of a single mode can be written in terms of the spatial coordinate r (with z denoting the direction of propagation), the polarization vector e, the longitudinal wave number k_z, and the annihilation operator a for that mode. Noise analysis is performed with respect to the mean value of the annihilation operator; to obtain the noise one solves for the real and imaginary parts of the field's projection onto the given mode. Spatial coordinates do not appear in the solution. Assume that the mean value of the initial field is ⟨a⟩. Physically, the initial state corresponds to the coherent pulse at the input of the optical amplifier and the final state to the amplified output pulse. Only the quantum state of the corresponding mode matters, so the pulse may be treated as a single-mode field. A quantum amplifier is a unitary transformation that maps the initial state to the amplified state. The amplification depends on the mean value ⟨a⟩ of the field operator and its variance. A coherent state has minimal uncertainty; under the amplifier transformation the uncertainty may increase. This increase can be interpreted as added noise in the amplifier. The gain may be defined in several equivalent ways; in the Heisenberg representation the field operator evolves while the state vector remains unchanged, so amplification is attributed to the evolution of the operator. In general the gain can be complex and may depend on the initial state. For laser applications the amplification of coherent states is most important, so one usually assumes an initial coherent state characterized by a complex amplitude. Even with this restriction the gain may depend on the amplitude or phase of the initial field. We use the Heisenberg representation and evaluate all expectation values with respect to the initial coherent state. The relevant quantity characterizes the increase of the field uncertainty due to amplification. Because the intrinsic uncertainty of the field operator does not depend on the coherent-state parameter, this quantity indicates how much the output field deviates from a coherent state. Linear phase-invariant amplifiers may be described as follows. Assume a unitary operator amplifies the field so that the input and output operators are related by a linear relation with c-number coefficients and an amplifier creation operator. Without loss of generality the coefficients may be taken as real.
The commutator of the field operators is invariant under the unitary transformation. From unitarity it follows that the amplifier mode satisfies the canonical Bose commutation relations, which imposes constraints on the c-number coefficients. Hence a phase-invariant amplifier acts by introducing an additional bosonic mode that stores energy. The gain and noise of this amplifier can be calculated; the coefficient is sometimes called the intensity amplification coefficient, and the amplifier necessarily adds a minimum amount of noise. A useful property of the linear amplifier is that if several modes are amplified by the same factor, the noise in each mode is determined independently. To obtain large gain with minimal added noise one can use homodyne detection to prepare a field state with known amplitude and phase corresponding to a linear phase-invariant amplifier. The uncertainty principle sets a lower bound on quantum noise in an amplifier. In particular, the outputs of laser systems and optical generators are not coherent states. Nonlinear amplifiers do not have a linear relation between input and output, and their minimum noise cannot be much smaller than that of an idealized linear amplifier. This limit is determined by the derivatives of the mapping function: larger derivatives imply greater uncertainty. Examples include most lasers, which can act as near-linear amplifiers when operating close to threshold and thus show large uncertainty and nonlinear behavior. As with linear amplifiers, they may preserve phase and keep uncertainty low, but there are exceptions, such as parametric oscillators, which amplify while shifting the input phase.
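The constraint that unitarity imposes can be made explicit. The following LaTeX sketch gives the standard input-output relation for a phase-insensitive linear amplifier; the symbols (g and m for the real coefficients, d for the internal amplifier mode) are notation chosen here rather than taken from the text:

% Output mode b in terms of the input mode a and an internal mode d:
\[
  \hat{b} = g\,\hat{a} + m\,\hat{d}^{\dagger}, \qquad g, m \in \mathbb{R}.
\]
% Preserving the commutator, $[\hat{b},\hat{b}^{\dagger}] = [\hat{a},\hat{a}^{\dagger}] = 1$, forces
\[
  g^{2} - m^{2} = 1,
\]
% so an intensity gain $G = g^{2} > 1$ comes with an added-noise term
% proportional to $m^{2} = G - 1$, the minimum allowed by the
% uncertainty principle.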
The Institute of Physics and Engineering in Medicine (IPEM) is the United Kingdom's professional body and learned society for physicists, engineers, and technologists in medicine. It was founded in 1995 and changed its name from the Institution of Physics and Engineering in Medicine and Biology (IPEMB) in 1997. The Institute is governed by an elected Board of Trustees, which oversees the Science, Research and Innovation Council and the Professional and Standards Council; these councils have operational responsibility for the Institute's scientific and professional activities. A substructure of committees, groups, and member panels carries out the Institute's work. IPEM is licensed by the Engineering Council to register Chartered Engineers, Incorporated Engineers, and Engineering Technologists, and by the Science Council to register Chartered Scientists, Registered Scientists, and Registered Science Technicians. Its charitable objects and articles of association state that the Institute's aim is to promote, for the public benefit, the advancement of physics and engineering applied to medicine and biology and to advance public education in the field.

History: The organization traces its origins to three societies: the Hospital Physicists Association (HPA), founded in 1943; the Hospital Physics Technicians Association (HPTA), founded in 1952; and the Biological Engineering Society (BES), founded in 1960. The HPA created its scientific arm, the Institute of Physical Sciences in Medicine (IPSM), in 1984. The trade union and scientific activities split in 1989: the scientific arm merged with the BES to form IPEMB, while the trade union (HPA) joined the Manufacturing, Science and Finance Trades Union (MSF). The Association of Medical Technologists (AMT), formerly HPTA, merged with IPEM in 2001.

Membership: There are several categories of membership:
- Fellowship (FIPEM): the most senior category, awarded only to Full Members (MIPEM) who have made an outstanding contribution to medical physics or engineering.
- Full Membership (MIPEM): for people seeking recognition as professional scientists, engineers, or technologists in the field of medical physics or engineering; includes the former Incorporated Membership.
- Associate Membership: for people working in the relevant area, including STP trainees; postgraduate students and apprentices in an appropriate field are eligible for free Associate Membership.
- Professional Affiliate Membership: for professionals who apply physics and engineering to medicine but who are not clinical scientists or clinical/biomedical engineers (e.g., doctors, radiographers, nurses, vets, dentists).
- Affiliate Membership: for anyone with an interest in medical physics and engineering; full-time undergraduate students are eligible for free Affiliate Membership.
- Honorary Fellowship: awarded for outstanding contributions in the field of physics or engineering applied to medicine or related biological science.
- Company Membership: dual membership with the Institute of Physics allows a 25% discount on membership subscriptions payable to each organisation for those who are, or become, individual members of both organisations.

Equality, diversity and inclusion: The Institute is a signatory of the Engineering Diversity Concordat of the Royal Academy of Engineering and the Science Council Diversity Declaration and has its own Equality Policy.

Annual Conference and Woolmer Lecture: The Institute holds an annual conference on medical physics and engineering.
During this conference the flagship lecture of the Institute, the Woolmer Lecture, is presented. The lecture is dedicated to Professor Ronald Woolmer, who was the first Director of the Research Department of Anaesthetics at the Royal College of Surgeons. Woolmer convened a meeting at the Royal College of Surgeons, London, to discuss the evolving field of engineering applied to medicine. It was agreed that the group should hold regular meetings, and as a result the Biological Engineering Society (BES) was formed, with Ronald Woolmer as its first President. Woolmer died two years after the formation of the BES, and it was agreed that a memorial lecture would be sponsored in recognition of his achievements. The following table includes a list of the lectures since 2002.

Publications: IPEM owns three international peer-reviewed journals: Physics in Medicine and Biology (PMB), Medical Engineering and Physics, and Physiological Measurement. PMB and Physiological Measurement are published in association with IOP Publishing, while Medical Engineering and Physics is published by Elsevier. The Institute also publishes SCOPE, the Institute's quarterly magazine, which is free to members and non-members; a report series; educational and teaching material; and a comprehensive e-book programme jointly with IOP Publishing.

President of IPEM: The IPEM president serves for two years and takes office at the Annual Conference. The following table includes a list of all past presidents of IPEMB/IPEM.

External links:
- Official site of the IPEM: https://www.ipem.ac.uk
- E-SCOPE (online archive of the IPEM magazine): https://www.ipem.ac.uk/ScientificJournalsPublications/SCOPE/E-SCOPE.aspx

Categories: ECUK Licensed Members; Educational institutions established in 1995; Engineering societies based in the United Kingdom; Medical physics organisations; Organisations based in York; 1995 establishments in the United Kingdom; Scientific organisations established in 1995; Medical and health organisations based in the United Kingdom.
A trusted execution environment (TEE) is a secure area of the main processor that protects code and data in terms of confidentiality and integrity. Data integrity prevents unauthorized entities outside the TEE from altering data, while code integrity prevents TEE code from being modified or replaced by unauthorized parties, which in some DRM schemes could include the device owner. This protection is provided by hardware-based mechanisms such as Intel Software Guard Extensions (Intel SGX), which offers memory encryption and isolates specific application code and data. Intel SGX allows user-level applications to allocate private memory regions called enclaves, which are protected even from higher-privileged processes. As an isolated execution environment, a TEE provides isolated execution, integrity of applications running inside it, and confidentiality of their assets. Generally, a TEE offers stronger security for trusted applications than a rich operating system and more functionality than a secure element. History: The Open Mobile Terminal Platform (OMTP) first defined TEE in their "Advanced Trusted Environment: OMTP TR1" standard, defining it as "a set of hardware and software components providing facilities necessary to support applications," which had to meet the requirements of one of two defined security levels. Profile 1 targeted only software attacks, while Profile 2 targeted both software and hardware attacks. Commercial TEE solutions based on ARM TrustZone technology and conforming to the TR1 standard were later launched, such as Trusted Foundations, developed by Trusted Logic. Work on the OMTP standards ended in mid-2010 when the group transitioned into the Wholesale Applications Community (WAC). The OMTP standards, including those defining a TEE, are hosted by the GSMA. Details: The TEE typically consists of a hardware isolation mechanism plus a secure operating system running on top of that isolation mechanism. However, the term has been used more generally to mean a protected solution. While a GlobalPlatform TEE requires hardware isolation, others such as EMVCo use the term TEE to refer to both hardware/software and software-only solutions. FIDO uses the TEE concept for restricted operating environments based on hardware isolation. Only trusted applications running in a Trusted Execution Environment (TEE) have access to the full capabilities of a device's main processor, peripherals, and memory; hardware isolation protects these resources from user-installed apps running in the main operating system. Software and cryptographic isolation inside the TEE also protect trusted applications from one another. Service providers, mobile network operators (MNOs), operating-system developers, application developers, device manufacturers, platform providers, and silicon vendors are the primary stakeholders in TEE standardization. To prevent simulation of hardware by user-controlled software, a hardware root of trust is used: a set of private keys embedded into the chip during manufacturing. One-time programmable memory such as eFuses is typically used on mobile devices; these keys cannot be changed even after device resets. The public counterparts of those keys are stored in a manufacturer database, along with a non-secret hash of the trusted party's public key (usually the chip vendor), which is used to sign trusted firmware and the circuits that perform cryptographic operations and control access. 
The hardware is designed to prevent any software not signed with the trusted party's key from accessing privileged features. At runtime, the vendor's public key is provided and hashed; the result is compared to the hash embedded in the chip. If the hash matches, the public key is used to verify a digital signature on trusted, vendor-controlled firmware (for example, the chain of bootloaders on Android devices or architectural enclaves in Intel SGX). That trusted firmware implements remote attestation. During attestation, an application's untrusted component loads its trusted component into memory, and hardware prevents untrusted components from modifying the trusted component. The untrusted party requests a nonce from the verifier's server; the nonce is used in a cryptographic authentication protocol to prove the integrity of the trusted application. The proof is returned to the verifier, which checks it. A valid proof cannot be produced on simulated hardware (e.g., QEMU) because constructing it requires access to keys embedded in the device; only the trusted firmware can access those keys or keys derived from them. Because only the platform owner is intended to access the data recorded by the manufacturer, the verifier must interact with a service set up by the vendor. If the scheme is implemented improperly, the chip vendor could track which applications run on which chip and selectively deny service by falsely reporting that authentication failed. To simulate hardware so it passes remote authentication, an attacker must extract keys from the device, which is costly because of the required equipment and technical skill. Techniques such as focused ion beams, scanning electron microscopy, microprobing, and chip decapsulation are difficult—or impossible—if the hardware is designed so reverse engineering destroys the keys. In most cases keys are unique to each device, so a key extracted from one chip cannot be used on others (for example, with physically unclonable functions). Although loss of ownership is not inherent to TEEs—it is possible to design a system that lets the first user retain control by burning a hash of their key into e-fuses—in practice consumer electronics are typically designed to allow chip manufacturers to control access to attestation and its algorithms.
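As a concrete illustration of the verification chain described above, here is a minimal, hypothetical Python sketch of the two checks a verifier performs at boot: comparing the hash of the supplied vendor key against the hash burned into the chip, then verifying the firmware signature. The Ed25519 scheme and every identifier are illustrative assumptions, not the mechanism of any particular TEE:

# Hypothetical sketch only; the key scheme and all names are illustrative.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_trusted_firmware(vendor_pubkey: bytes,
                            embedded_pubkey_hash: bytes,
                            firmware_image: bytes,
                            firmware_signature: bytes) -> bool:
    # Step 1: hash the supplied vendor public key and compare it with the
    # hash embedded in the chip (e.g., in eFuses) at manufacturing time.
    if hashlib.sha256(vendor_pubkey).digest() != embedded_pubkey_hash:
        return False  # key does not match the hardware root of trust

    # Step 2: use the now-trusted public key to check the digital
    # signature on the vendor-controlled firmware image.
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(
            firmware_signature, firmware_image)
    except InvalidSignature:
        return False
    return True

A remote-attestation exchange would additionally bind the verifier-supplied nonce into the signed message, as the passage notes, so that a recorded proof cannot be replayed.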
Beyond the back of Rico's villa, which sat on a mountainside, a path wound through dense forest to the rarely explored wilderness. In the upper-right distance stood Mt. Conquest, a desolate peak shaped like a Roman nose. Determined to gain experience before his journey, Rico set out to run some tests. Dressed in running gear, he stepped out the back door and was immediately surrounded by thick undergrowth. After a short walk he came to a small wooden shelter that served as a rain cover and a resting place. After a quick break he continued up the slope. As he trudged upward, he fixated on a massive stone slab ahead. With a thought the slab vanished, and Rico grinned as he saw an identical stone resting on a nearby plot of land in another dimension. Knowing objects from the outside world could be transferred to his disc world without issue, he exhaled deeply. He called this place the Cradle, where he would explore and practice until he could create a true world. The Cradle was a kilometer-long stretch of flat land, encircled by a white ash-grey atmosphere that served as a barrier, protecting the world from chaos and keeping any harmful radiation from affecting living creatures. There was no life in the Cradle: only dirt, stones, and water. Using his authority, Rico could observe anything within his universe for as long as he wished. The blazing orb in the sky was the sun, and the world itself appeared to be a small flat disc floating in space. To support life he had built a three-kilometer-thick atmosphere. With a thought, the stone from the Cradle returned to the outside world. Before, Rico had no aspirations, interests, or goals, but now that he had the disc world he had specific aims, one of which was to build a vivid realm drawn from myths and legends. The Cradle lacked an entire suite of biological chains; these biological links would supplement the myriad laws Rico had imposed on it. After a life was extinguished, the Cradle would take its energy to grow the world, boost its vitality, and produce various materials. The idea was partly inspired by visions Enoch had given him and by articles he'd read on the internet. It was the simplest world he had ever attempted to construct, and a rudimentary one at that. The Book of Enoch, which he had written using the power of Origin, contained all the incomprehensible things the white orbs had implanted in his mind, and despite trying to read much of it over the previous weeks, he still didn't understand even one percent. Rico cocked his head and scanned the area, noting the profusion of weeds, shrubs, and other plants. "This should be alright," he thought. The life of the Cradle would start among the weeds and insects on the ground. Rico's physical condition was poor; his long, narrow arms could barely pull up a few weeds. If he simply tore them out, the plants would be destroyed and their deep roots left behind. He pulled at the weeds until he had to rest. After using his power to place them, a stand of plants soon appeared in the Cradle beside a small running stream. As he glanced a few meters away, a boulder, the surrounding soil, and a host of insects and microorganisms rose into the air and entered the Cradle. In the Cradle, the weeds he had planted had already grown into a patch of grass and tall shrubs with clumps three to four meters long. He wondered if it was because time in the Cradle flowed differently. Because of its small size, Rico could manipulate time; in a tiny world, time could pass hundreds of thousands of times faster.
At least two years had gone by in a few minutes. The plants were swarming with ants whose shells gleamed oddly metallic; Rico was astonished. It showed that Earth's organisms could not only survive but also mutate. Looking around, he spotted mosquitoes buzzing over a nearby pond. "I should investigate this small food chain," Rico pondered. "What do ants eat? What do mosquitoes feed on? Ants are insects that will eat almost anything. Mosquitoes feed on blood—could they take blood from ants? And how could a mosquito pierce an ant that's two centimeters long? Truly fascinating." Rico beamed a nasty grin.
Pro Evolution Soccer 2018 (abbreviated PES 2018) is a sports video game developed and published by Konami for Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, Android, and iOS. The game is the 17th installment in the Pro Evolution Soccer series and was released worldwide in September 2017. It was the final PES game released for the PlayStation 3 and Xbox 360 consoles and the last to feature the UEFA Champions League, UEFA Europa League, and UEFA Super Cup licenses, as well as the Borussia Dortmund partnership. A mobile version, PES 2018 Mobile, was released for iOS and Android and had exceeded 150 million downloads as of August 2018. PES 2018 was succeeded by Pro Evolution Soccer 2019. Konami kept the theme of the previous release and announced a special "Barcelona Edition" along with pre-order bonus content for both digital and physical versions. Barcelona, Atlético Madrid, Borussia Dortmund, and Liverpool were confirmed as licensed at E3 2017. FC Schalke 04, Valencia, Fulham, and the Brazil national football team were also licensed, and the France national football team license was confirmed in the online beta. Konami released a demo on August 30, 2017, for PlayStation 3, PlayStation 4, Xbox 360, and Xbox One; the demo included limited stadiums, clubs, and features. Konami did not develop a Nintendo Switch version but has said it is open to porting future games in the series. The first data pack, Data Pack 1, launched on October 5, 2017, and featured 117 player face updates, ten new boots, updated backboards in Master League for Barcelona, and over 3,000 new player thumbnails. Data Pack 2.0 was released on November 15, 2017; it added Arsenal's Emirates Stadium, the Estadio Nacional in Santiago, Chile, new boots, and new player faces, but also removed licenses for Avaí, Fluminense, São Paulo, and Vasco da Gama players, replacing them with generic players. The trailer for the pack was released on the same day. When PES 2012 was released, it introduced new ball intelligence, improved positioning into space, and wingers attempting overlaps. In 2013, G-cluster announced that PES had introduced innovative improvements to player facial identification, and patch notes and updates were provided to assist new players. Pro Evolution Soccer 2018 received generally favorable reviews, according to Metacritic; IGN awarded it a 9.2 out of 10, calling it "Amazing" and saying, "Once again, PES has set an incredibly high level of quality for other sports games to try and match." GameSpot called it "the most satisfying football game ever made" and said its excellent on-pitch gameplay gives it an edge over FIFA 18. Commercial performance: Pro Evolution Soccer 2018 sold 64,342 copies on PlayStation 4 within its first week on sale in Japan, placing it at number one on the all-format sales chart. PES 2018 Mobile exceeded 150 million downloads as of August 2018. During May–July 2018 the game was downloaded millions of times and grossed substantial revenue. The game also grossed significant revenue in Japan during 2018. Accolades: Eurogamer ranked the game 43rd on their list of the "Top 50 Games of 2017". The game won "Best Sports Game" at the Gamescom 2017 Awards and was nominated for "Best Sports Game" at the Game Critics Awards, "Best Multiplayer Game" at the 35th Golden Joystick Awards, "Best Sports/Racing Game" at The Game Awards 2017, and "Best Sports/Driving Game" at the Titanium Awards. 
It won "Best Gameplay" at Game Informer's 2017 Sports Game of the Year Awards and was nominated for "People's Choice" at the Italian Video Game Awards. International competition: Pro Evolution Soccer was used as part of the electronic sports demonstration event at the 2018 Asian Games held in Indonesia. Pro Evolution Soccer 2018 was the title used in the event. Eight countries were able to participate after qualifying through their respective regional qualifiers, with Indonesia automatically qualifying as the host. PES League, FIFA 18, Association football video games, Konami games, Multiplayer and single-player video games, PlayStation 3 games, PlayStation 4 games, 2017 video games, 2018 video games, Sports video games set in France, Sports video games set in Italy, Video games set in 2017, Video games set in 2018, Video games set in Brazil, Video games set in Chile, Video games set in England, Video games set in Europe, Sports video games set in Germany, Video games set in Switzerland, Windows games, Xbox 360 games, Xbox One games, PlayStation 4 Pro enhanced games, Sports video games with career mode, Video games developed in Japan, Fox Engine games
The 3 September 1843 Revolution (N.S. 15 September) was an uprising by the Hellenic Army in Athens, supported by large sections of the population, against the autocratic rule of King Otto. The rebels, led by veterans of the Greek War of Independence, demanded a constitution and the departure of the Bavarian officials who dominated the government. The revolution succeeded, ushering in a period of constitutional monarchy (under the 1844 constitution) and universal suffrage in Greece.

Background: During the War of Independence, the Greek rebels had passed a series of liberal and progressive constitutions on which the war's provisional governments were based. With the establishment of the monarchy in 1832 and the arrival of the Bavarian prince Otto as king, however, these liberal institutions were discarded. For the next ten years, Otto and his mainly Bavarian officials ruled in an autocratic manner, causing widespread resentment among a people who had just been liberated from foreign rule. The "Bavarocracy" (Βαυαροκρατία), as it was called—intentionally recalling the periods of "Francocracy" and "Turkocracy"—even extended to the use of German alongside Greek in the state administration.

Greek politicians constantly demanded an end to this state of affairs. They wanted the Bavarians, above all the much-despised Major Hess, sent back to their country and a constitution to be granted. However, they did not question the monarchy itself or the power of the king. Indeed, they did not wish to impose a constitution but demanded that the king grant them one. These demands grew ever stronger as time passed and cut across the political spectrum: the French, English, and Russian parties all voiced them. The king's repeated refusals to yield led to radicalization.

The politicians therefore resorted to conspiracy, which was not a new form of political action in Greece: conspiracies had existed both before and during the War of Independence. The first Greek governments, such as that of John Capodistria, had to confront conspiracies, which had never really disappeared. This movement, however, was much more important and came out into the open on 3 September 1843. The principal conspirators were Yannis Makriyannis, Andreas Metaxas, Andreas Londos, and Michael Soutzos. They had managed to convince certain officers to join their side, chief among them Colonel Dimitrios Kallergis (commander of the Athens cavalry), Colonel Nikolaos Skarvelis (commander of the Athens infantry), and Colonel Spyromilios (commander of the Military Academy). Thus, the conspirators were certain to have army support. Their idea was to act quickly to present the palace with a fait accompli.

A first date was chosen: 25 March 1844, the anniversary of the uprising against the Ottomans. The constitution would thus appear as the logical and necessary consequence of independence, but the conspiracy was poorly kept. Yannis Makriyannis, for example, spent his time recruiting new conspirators and in the process exposed the plot. It was decided to move more quickly to action at the beginning of September 1843. On the night of 2 September 1843 it was learned that the names of the conspirators were known to the police, and incidents took place around Makriyannis's home. Kallergis therefore acted on his own initiative: he went to the barracks, gathered his men, and headed for the Old Royal Palace. At the same time he ordered the gates of the Medrese Prison to be opened.
Captain Schinas, commander of the Athens artillery, received orders to suppress the nascent insurrection but chose to join the movement. The soldiers arrived at the Old Royal Palace and shouted "Long live the Constitution!" beneath the king's windows. Otto yielded to the demands and granted the 1844 Constitution; the Council of State had already prepared the constitution in anticipation of the coup. The king then asked Metaxas to form a new government and to summon a new National Assembly, which met on 10 November (OS) / 20 November (NS). The troops returned to their barracks, acclaiming the king as a constitutional monarch.

The coup was bloodless, and France and the United Kingdom accepted these changes without difficulty. For the French of the July Monarchy era, 3 September 1843 recalled their Revolution of 1830. For the British, the Glorious Revolution of 1688 remained the liberal model par excellence in the nineteenth century. Only Russia condemned the movement because of its autocratic, authoritarian, and consequently anti-liberal nature. The assembly designated a constitutional commission, and a constitution was proclaimed in March 1844. Since then, the square in front of the Old Royal Palace has been renamed Constitution Square (Syntagma Square in Greek).
Since both A with subscript r equals 8 and A with subscript r equals 64 are learned using the same pre-trained model, this indicates that the top singular-vector directions of A with subscript r equals 8 and A with subscript r equals 64 are the most useful, while other directions potentially contain mostly random noise accumulated during training. Hence, the adaptation matrix can indeed have a very low rank.

Subspace similarity between different random seeds. We further confirm this by plotting the normalized subspace similarity between two randomly seeded runs with r=64. delta Wq appears to have a higher "intrinsic rank" than delta Wv, since more common singular-value directions are learned by both runs for delta Wq, which is in line with our empirical observation. As a comparison, we also plot two random Gaussian matrices, which do not share any common singular-value directions with each other.

How Does the Adaptation Matrix delta W Compare to W? We further investigate the relationship between delta W and W. In particular, does delta W highly correlate with W? (Or mathematically, is delta W mostly contained in the top singular directions of W?) Also, how "large" is delta W compared to its corresponding directions in W? This can shed light on the underlying mechanism for adapting pre-trained language models. To answer these questions, we project W onto the r-dimensional subspace of delta W by computing U transpose W V transpose, with U and V being the left and right singular-vector matrices of delta W. Then, we compare the Frobenius norm of U transpose W V transpose with the Frobenius norm of W. As a comparison, we also compute the Frobenius norm of U transpose W V transpose after replacing U, V with the top r singular vectors of W or a random matrix.

We draw several conclusions. First, delta W has a stronger correlation with W compared to a random matrix, indicating that delta W amplifies some features that are already in W. Second, instead of repeating the top singular directions of W, delta W only amplifies directions that are not emphasized in W. Third, the amplification factor is rather huge: 21.5, which is approximately 6.91 divided by 0.32, for r equals 4. This suggests that the low-rank adaptation matrix potentially amplifies important features for specific downstream tasks that were learned but not emphasized in the general pre-training model.
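The subspace projection just described is straightforward to reproduce. Below is a minimal numpy sketch, not the paper's released code: the function name and the random example matrices are ours, and real runs would load actual pre-trained and adapted weights. It projects W onto the top-r singular directions of delta W and returns the amplification factor, the Frobenius norm of delta W divided by the Frobenius norm of the projection.

```python
import numpy as np

def amplification_factor(W: np.ndarray, delta_W: np.ndarray, r: int) -> float:
    """||delta_W||_F / ||U_r^T W V_r||_F, following the projection above."""
    # Singular directions of the low-rank update: delta_W = U diag(s) Vt.
    U, _, Vt = np.linalg.svd(delta_W, full_matrices=False)
    U_r, V_r = U[:, :r], Vt[:r, :].T
    # Pre-trained weights projected into delta_W's top-r subspace.
    W_proj = U_r.T @ W @ V_r
    return np.linalg.norm(delta_W) / np.linalg.norm(W_proj)

# Toy example with a rank-4 update; trained weights would replace these.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 768))
delta_W = rng.standard_normal((768, 4)) @ rng.standard_normal((4, 768))
print(amplification_factor(W, delta_W, r=4))
```

On trained weights, this ratio is what yields the roughly 21.5 amplification reported above for r equals 4.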
Conclusion and Future Work: Fine-tuning enormous language models is prohibitively expensive in terms of the hardware required and the storage/switching cost for hosting independent instances for different tasks. We propose LoRA, an efficient adaptation strategy that neither introduces inference latency nor reduces input sequence length while retaining high model quality. Importantly, it allows for quick task-switching when deployed as a service by sharing the vast majority of the model parameters. While we focused on Transformer language models, the proposed principles are generally applicable to any neural network with dense layers. There are many directions for future work. 1) LoRA can be combined with other efficient adaptation methods, potentially providing orthogonal improvements. 2) The mechanism behind fine-tuning or LoRA is far from clear -- how are features learned during pre-training transformed to do well on downstream tasks? We believe that LoRA makes it more tractable to answer this than full fine-tuning. 3) We mostly depend on heuristics to select the weight matrices to apply LoRA to. Are there more principled ways to do it? 4) Finally, the rank-deficiency of delta W suggests that W could be rank-deficient as well, which can also be a source of inspiration for future work.

Appendix

Large Language Models Still Need Parameter Updates: Few-shot learning, or prompt engineering, is very advantageous when we only have a handful of training samples. However, in practice, we can often afford to curate a few thousand or more training examples for performance-sensitive applications. Fine-tuning improves the model performance drastically compared to few-shot learning on datasets large and small. We take the GPT-3 few-shot result on RTE from the GPT-3 paper. For MNLI-matched, we use two demonstrations per class and six in-context examples in total.

Inference Latency Introduced by Adapter Layers: Adapter layers are external modules added to a pre-trained model in a sequential manner, whereas our proposal, LoRA, can be seen as external modules added in a parallel manner. Consequently, adapter layers must be computed in addition to the base model, inevitably introducing additional latency. As has been pointed out, the latency introduced by adapter layers can be mitigated when the model batch size and/or sequence length is large enough to fully utilize the hardware parallelism. We confirm their observation with a similar latency study on GPT-2 medium and point out that there are scenarios, notably online inference where the batch size is small, in which the added latency can be significant. We measure the latency of a single forward pass on an NVIDIA Quadro RTX8000 by averaging over 100 trials. We vary the input batch size, sequence length, and the adapter bottleneck dimension r. We test two adapter designs: the original one by Houlsby et al., which we call Adapter H, and a recent, more efficient variant by Lin et al., which we call Adapter L. We plot the slow-down in percentage compared to the no-adapter baseline.

Dataset Details: GLUE Benchmark is a wide-ranging collection of natural language understanding tasks. It includes MNLI (inference), SST-2 (sentiment analysis), MRPC (paraphrase detection), CoLA (linguistic acceptability), QNLI (inference), QQP (question answering), RTE (inference), and STS-B (textual similarity). The broad coverage makes the GLUE benchmark a standard metric for evaluating NLU models such as RoBERTa and DeBERTa. The individual datasets are released under different permissive licenses. WikiSQL contains 56,355 training and 8,421 validation examples. The task is to generate SQL queries from natural language questions and table schemata. We encode context as the set of table schema and query, and target as the SQL. The dataset is released under the BSD 3-Clause License. SAMSum contains 14,732 training and 819 test examples. It consists of staged chat conversations between two people and corresponding abstractive summaries written by linguists. We encode context as newline-concatenated utterances followed by a double newline, and target as the summary. The dataset is released under a non-commercial licence: Creative Commons BY-NC-ND 4.0. E2E NLG Challenge was first introduced as a dataset for training end-to-end, data-driven natural language generation systems and is commonly used for data-to-text evaluation. The E2E dataset consists of roughly 42,000 training, 4,600 validation, and 4,600 test examples from the restaurant domain. Each source table used as input can have multiple references.
Each sample input (x, y) consists of a sequence of slot-value pairs, along with a corresponding natural language reference text. The dataset is released under Creative Commons BY-NC-SA 4.0. DART is an open-domain data-to-text dataset. DART inputs are structured as sequences of ENTITY, RELATION, ENTITY triples. With around 82K examples in total, DART is a significantly larger and more complex data-to-text task compared to E2E. The dataset is released under the MIT license. WebNLG is another commonly used dataset for data-to-text evaluation. With around 22K examples in total, WebNLG comprises 14 distinct categories, nine of which are seen during training. Since five of the 14 total categories are not seen during training but are represented in the test set, evaluation is typically broken out by "seen" categories (S), "unseen" categories (U), and "all" (A). Each input example is represented by a sequence of SUBJECT, PROPERTY, OBJECT triples. The dataset is released under Creative Commons BY-NC-SA 4.0.

Hyperparameters Used in Experiments

RoBERTa: We train using AdamW with a linear learning rate decay schedule. We sweep learning rate, number of training epochs, and batch size for LoRA. Following Liu et al., we initialize the LoRA modules to our best MNLI checkpoint when adapting to MRPC, RTE, and STS-B, instead of the usual initialization; the pre-trained model stays frozen for all tasks. We report the median over 5 random seeds; the result for each run is taken from the best epoch. For a fair comparison with the setup in Houlsby et al. and Pfeiffer et al., we restrict the model sequence length to 128 and use a fixed batch size for all tasks. Importantly, we start with the pre-trained RoBERTa large model when adapting to MRPC, RTE, and STS-B, instead of a model already adapted to MNLI. The runs with this restricted setup are marked with a dagger.

DeBERTa: We again train using AdamW with a linear learning rate decay schedule. Following He et al., we tune learning rate, dropout probability, warm-up steps, and batch size. We use the same model sequence length used by He et al. to keep our comparison fair. Following He et al., we initialize the LoRA modules to our best MNLI checkpoint when adapting to MRPC, RTE, and STS-B, instead of the usual initialization; the pre-trained model stays frozen for all tasks. We report the median over 5 random seeds; the result for each run is taken from the best epoch.

GPT-2: We train all of our GPT-2 models using AdamW with a linear learning rate schedule for 5 epochs. We use the batch size, learning rate, and beam search beam size described in Li and Liang. Accordingly, we also tune the above hyperparameters for LoRA. We report the mean over 3 random seeds; the result for each run is taken from the best epoch.

GPT-3: For all GPT-3 experiments, we train using AdamW for 2 epochs with a batch size of 128 samples and a weight decay factor of 0.1. We use a sequence length of 384 for WikiSQL, 768 for MNLI, and 2048 for SAMSum. We tune the learning rate for all method-dataset combinations. For prefix-embedding tuning, we find the optimal lp and li to be 256 and 8, respectively, totalling 3.2M trainable parameters. We use lp=8 and li=8 for prefix-layer tuning with 20.2M trainable parameters to obtain the overall best performance. We present two parameter budgets for LoRA: 4.7M and 37.7M. We report the best validation performance from each run.
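For quick reference, the GPT-3 settings above can be collected into a small config sketch. This is only a paraphrase of the appendix text; the variable and key names are ours, not from any released code.

```python
# Shared GPT-3 fine-tuning settings reported in the appendix.
gpt3_train = {"optimizer": "AdamW", "epochs": 2, "batch_size": 128, "weight_decay": 0.1}

# Task-specific sequence lengths.
seq_len = {"WikiSQL": 384, "MNLI": 768, "SAMSum": 2048}

# Trainable-parameter budgets explored per method.
budgets = {
    "prefix_embed": "3.2M (lp=256, li=8)",
    "prefix_layer": "20.2M (lp=8, li=8)",
    "lora": ["4.7M", "37.7M"],
}
```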
Combining LoRA with Prefix Tuning: LoRA can be naturally combined with existing prefix-based approaches. In this section, we evaluate two combinations of LoRA and variants of prefix-tuning on WikiSQL and MNLI. LoRA+PrefixEmbed (LoRA+PE) combines LoRA with prefix-embedding tuning, where we insert lp plus li special tokens whose embeddings are treated as trainable parameters. LoRA+PrefixLayer (LoRA+PL) combines LoRA with prefix-layer tuning. We also insert lp plus li special tokens; however, instead of letting the hidden representations of these tokens evolve naturally, we replace them after every Transformer block with an input-agnostic vector. Thus, both the embeddings and subsequent Transformer block activations are treated as trainable parameters. We show the evaluation results of LoRA+PE and LoRA+PL on WikiSQL and MultiNLI. First of all, LoRA+PE significantly outperforms both LoRA and prefix-embedding tuning on WikiSQL, which indicates that LoRA is somewhat orthogonal to prefix-embedding tuning. On MultiNLI, the combination of LoRA+PE doesn't perform better than LoRA, possibly because LoRA on its own already achieves performance comparable to the human baseline. Secondly, we notice that LoRA+PL performs slightly worse than LoRA even with more trainable parameters. We attribute this to the fact that prefix-layer tuning is very sensitive to the choice of learning rate and thus makes the optimization of LoRA weights more difficult in LoRA+PL.

Additional Empirical Experiments

Additional Experiments on GPT-2: We also repeat our experiment on DART and WebNLG following the setup of Li and Liang. The result is shown.
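To make the sequential-versus-parallel contrast from the adapter-latency discussion above concrete, here is a minimal PyTorch sketch of a LoRA-style linear layer. This is an illustrative reconstruction under the paper's description, not the authors' released implementation: the class name, scaling convention, and initialization constants are ours. The frozen pre-trained weight and the rank-r update act on the same input in parallel, so at deployment the update B A can be merged into W, leaving no extra inference latency.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a parallel rank-r update.

    Computes W x + (alpha / r) * B A x; only A and B are trainable.
    """

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # A starts Gaussian and B starts at zero, so delta W = B A is
        # initially zero and training begins from the pre-trained model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Two parallel branches over the same input, unlike a sequential
        # adapter that must run after the base layer finishes.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=4)
out = layer(torch.randn(2, 768))  # (batch, features) -> (batch, features)
```

Merging for deployment amounts to adding scale times B A into the base weight matrix, after which the layer behaves as an ordinary linear layer again.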
What do all the above pictures have in common? Notice the furry red thing that Maia is holding in all of them. That is her "Bah-Boo" (Elmo doll). That thing is probably so germ-infested that if CPS ever got their hands on it, they would take my children away from me just because I allow my daughter to sleep with that nasty doll.

Bah-Boo was bought for Noah by his grandma about eight or nine months ago, but gradually Maia began to take more interest in it and it went everywhere with her. I've run over it with the stroller quite a few times after it fell on the ground. It has had many different types of food and drink spilled on it, including milk; it has been thrown up on; and it has come into contact with a wet diaper.

Have I washed it? Well, when I looked at the tag it said "surface washable only," which to me meant I could only wipe it down with a towel, which would do absolutely nothing except add more germs. My other reason for not washing it is that Maia loves it so much because of the smell. Every time she goes to bed she grabs Bah-Boo's hand, puts it up to her nose, sticks her thumb in her mouth, and is out like a light in no time. If I wash it, she might not ever sleep again because she obviously finds the smells of vomit, sour milk, and pee soothing. Last week I even tried buying a duplicate doll to see if she would take it. Nope—she wanted nothing to do with it.

Today I finally decided that it was time to wash the nasty doll. During our river trip it had fallen on the floor of the casino and picked up a bit of a cigarette-ash smell, and I felt guilty letting my daughter sleep with that unwashed thing any longer. It came out of the dryer with its fur all clumped together, its eyes scraped off, and smelling fresh and clean. At nap time I crossed my fingers, put her in her crib, and set both the cleaned Bah-Boo and the duplicate Bah-Boo next to her. She immediately went for the freshly washed one, grabbed its hand, put it to her nose, gave a confused look, tried again, and then held the doll up to me.

This morning was definitely not a good one for my offspring. It is Bup's birthday today, and Noah was supposed to go on stage during the church service to give her a present. I dressed him in his nice black button-up shirt and gave him a pep talk about what he was supposed to do. He whined and acted up all morning and made me extremely nervous. When I reached the point of doing anything to keep him from melting down, I gave him his milk even though I knew it was a bad idea. Sure enough, the sippy-cup top popped off and the milk spilled all over his nice black shirt. I rushed him into the bathroom, laughed so he wouldn't cry, and rinsed him with water. He was supposed to go on stage in four minutes. I begged him not to cry, told him he needed to be a good boy for Bup, and rushed him to the side of the stage. He began to whine because he wanted to color on her present. I found a pen nearby and let him color on it. He was satisfied.

When it was his time to go onstage and give Bup her present, he walked up slower than a turtle with me harshly whispering "go" behind him. He handed her the piece of paper and walked off. No hug, no kiss, nothing! But I have to say I was thrilled — he made it up without a tantrum and with no one noticing his milk-soaked shirt. As soon as we left the main auditorium, the meltdown began. There was no bribe or threat that could calm him; he just needed his bed. Once we picked up Maia from her class, we had double trouble.
She would not let anyone come near her because she was so cranky. We had to get out of there fast. We did, and when we got home we put the kids straight to bed. When they awoke two hours later, I was amazed at the angels that emerged from my kids' room: they were happy, respectful, well-mannered, and as pleasant as could be. I am in awe of the power of the nap and want everyone to know that my kids will nap until they move out of my house at eighteen. It seems my children go to sleep as little demons and wake up as angels.

Will I ever be able to use the restroom in my own house by myself again, or will I always have little ones follow me in and watch me closely? I wish I could torch all these stupid ants and watch them shrivel up and die. Hmmm... Are there earplugs for little kids? I wonder if my kids would sleep longer if I put earplugs in their ears. There must be something rotting inside my son's belly; it can't be normal for a two-year-old's farts to make my eyes water and trigger my gag reflex. I seriously need to look into anger-management classes for my one-year-old. She just hit and kicked the wall and then tried to bite it after she accidentally ran into it. I feel sorry for any child who ever bullies her.

This weekend, on our four-hour drive up to Laughlin, the kids started to get restless, and I had run out of distractions, so I decided to buy myself five minutes of peace by telling them a Bible story.
Peninsula College of Medicine and Dentistry (PCMD) was a medical and dental school in England, run in partnership with the University of Exeter, the University of Plymouth, and the NHS in Devon and Cornwall. In January 2013, the school began to disaggregate into Plymouth University Peninsula Schools of Medicine and Dentistry and the University of Exeter Medical School. The school had campuses at the University of Plymouth, the University of Exeter, the John Bull Building (Derriford Hospital and Plymouth Science Park), the Royal Devon and Exeter Hospital, and the Royal Cornwall Hospital. Teaching of medical students also took place at North Devon District Hospital in Barnstaple, South Devon Healthcare Trust in Torbay, general practices across the region, and a number of community-orientated healthcare settings.

History: The Peninsula Medical School was established on 1 August 2000, preceding the dental school by six years, following a successful bid to the government as part of a national expansion of medical student numbers in the UK. The bid was creatively led by Professor Sir John Tooke, who was then in a joint appointment between the University of Exeter and the Royal Devon and Exeter Hospital. Professor Tooke was appointed the school's first dean, a post he held until autumn 2009. His vision and drive were recognised nationally when he was appointed chair of the UK Committee of Heads of Medical Schools and was awarded a knighthood in the 2007 New Year Honours.

The school was opened as part of the British government's efforts to train more doctors; other new schools included Brighton and Sussex Medical School, the University of East Anglia Medical School, Hull York Medical School and Keele University School of Medicine. According to league tables in the media, the Peninsula College of Medicine and Dentistry (PCMD) consistently outperformed other new institutions and proved highly competitive with established medical schools.

In 2012 the two founding universities of PCMD announced plans to expand independently and build on the provider's national reputation. With an equitable split of student numbers, the University of Exeter created the University of Exeter Medical School (UEMS), while Plymouth University created the Plymouth University Peninsula Schools of Medicine and Dentistry (PUPSMD). The inaugural deans of the new Exeter and Plymouth medical schools were Professor Steve Thornton and Professor Rob Sneyd respectively. Students who had already started their studies at Peninsula Medical School continued and graduated with joint degrees from the two universities, as previous graduates had. Students entering either UEMS or PUPSMD pursued independent degrees from the University of Exeter or Plymouth University.

The first intake of 130 undergraduate students commenced on 30 September 2002; from September 2003 the annual intake rose to 167. In January 2006 Peninsula Medical School was awarded funding for further expansion and the UK and overseas places increased, raising the intake to 214 from September 2006 and to 230 from September 2010.

For the first two years of the undergraduate programme, students were based at either the University of Exeter or the University of Plymouth, with an emphasis on biomedical sciences taught in the context of relevant clinical problems. From the first week, students learned in various community-based clinical environments.
In years three and four, students spent the majority of their time in acute and community-based clinical placements based at one of the school's three main localities: Exeter, Truro, or Plymouth. In the original vision for the Peninsula Medical School, an innovative medical humanities focus was established that included a Special Studies unit in which a wine expert and a perfumer developed students' sensory awareness. During year five, students were attached to clinical apprenticeships with general practitioners and consultants throughout Devon and Cornwall.

Research within the college focused on four main themes: Diabetes; Cardiovascular Risk and Ageing; Neuroscience (embracing both neurology and mental health); and Health Services Research and Environment and Human Health. In the 2008 Research Assessment Exercise (RAE), Peninsula Medical School submitted in two Units of Assessment: "Other Hospital Based Clinical Subjects" and "Health Services Research". In "Other Hospital Based Clinical Subjects", 65% of their submission was judged to be of international or world-class quality, ranking Peninsula Medical School 11th of 27 submissions from UK medical schools. Their research in the "Health Services Research" category was also judged to be of high international standard, with 50% of the submission judged as international or world-class, ranking them 13th out of 24 submissions.

Peninsula Dental School (PDS) was established on 26 January 2006 following a successful bid to the government as part of a national expansion of dental student numbers in the UK. It was the first dental school to open in the UK for three decades. The Peninsula Dental School was a member of the Dental Schools Council, and its inaugural dean was Professor Liz Kay. The Peninsula Dental School trained 64 dentists a year and offered a joint Bachelor of Dental Surgery (BDS) degree through the Universities of Exeter and Plymouth. The programme was four years long and was designed for science graduates or healthcare professionals. For the first two years of the dental programme, students were based mainly at the University of Plymouth, with an emphasis on core clinical and communication skills.

The Peninsula Postgraduate Health Institute (PPHI) contracted with the NHS in Devon and Cornwall to provide taught programmes and research opportunities in medicine, health, and social care, working in collaboration with the NHS. The programmes were provided by the University of Plymouth's Faculty of Health and Social Work and by schools of the University of Exeter. The Peninsula College of Medicine and Dentistry was represented on the Board of PPHI. The Peninsula Allied Health Collaboration (PAHC) was a separate partnership between the two universities and the University of St Mark and St John, Plymouth. It contracted with the NHS to provide undergraduate programmes in allied health professions such as nursing, occupational therapy and radiography.
Quantum Mistake (originally titled Change Guy) is a manhwa published in 31 volumes between 1998 and 2006. It was written by Son Eun-ho and illustrated by Choi Myung-su. It tells the tale of two boys, Woo-Soo Choi and Kang Too-Jee, whose souls are accidentally switched.

On the day a scientist is performing body-teleportation experiments, Woo-Soo Choi and Kang Too-Jee get into a fight after Woo-Soo accidentally pulls down Kang Too-Jee's trousers. As they fight, the scientist—who had been chasing a homeless person she had tried to bribe—runs them over and uses them to test her new teleportation device. The teleportation appears successful; she dumps their bodies and drives off, only to discover the device malfunctioned: she had teleported their bodies but not their minds. When Woo-Soo wakes up, he sees his own body lying on the ground. Realizing their minds have been switched, he carries the unconscious body to the hospital. When Kang Too-Jee, now in Woo-Soo's body, wakes up, he seems to have amnesia and has reverted to a childlike state. Woo-Soo's mother believes the boy in Kang Too-Jee's body caused the incident and chases him out of the hospital. Woo-Soo Choi is able to find out where Kang Too-Jee lives and goes to school, but trouble soon appears in the form of Kang Too-Jee's old rivals, and the fighting begins.

Characters:
Woo-Soo Choi: Currently inhabits Kang Too-Jee's body. Extremely studious with exceptional concentration, he has a very kind heart but is often misunderstood because of Kang Too-Jee's personality and past. He becomes a student of the Gyuk Moo-Do martial arts school in order to fight one of his first enemies.
Kang Too-Jee: Currently inhabits Woo-Soo Choi's body. Initially suffering from amnesia, he is not seen often until later volumes. He recovers his memory during a fight and confronts Woo-Soo Choi. Afterward, he stays at Woo-Soo Choi's house at Woo-Soo's request so his mother will not be upset. He becomes a Judo student—a discipline he could never previously master but can now, because Woo-Soo Choi's body is weak yet extremely flexible (double-jointed).
Park Yu-Na: Beautiful and popular among boys, she is afraid of gangsters and troublemakers. She always thought Kang Too-Jee was scary and violent until Woo-Soo Choi took over his body. She then begins falling for the person she believes is Too-Jee but who is actually Woo-Soo.
Do Do-He (also spelled Do Do-Hye or Do Do-Heh): Attractive, smart, and stubborn, she is Kang Too-Jee's landlord. She dislikes him until Woo-Soo Choi takes over his body.
Chun Moo-Jin: A long-haired, pretty-boy type of character. He befriends Kang Too-Jee when they learn Hapkido together. He very quickly becomes obsolete as a fighter but remains an amusing character. He is also in love with Do Do-He (Yu-Na's close friend).

Supporting characters — The Four Dragons of Goo-Ryong High School:
Ji Kang-Hyuk: The leader of the Four Heavenly Dragons, the Daeil Cheonhwang of Goo-Ryong High School. He is an incredibly gifted fighter who likes to chain together aerial moves. At the beginning of the story he is the strongest fighter in Seoul. He is defeated in volume 12 by Shin Jin-Ho. After the defeat he trains for a year and learns the Corkscrew Punch in an effort to redeem himself in a rematch. He then participates in the tournament against the five gang leaders from the correctional institute; his first opponent is the savage Mah Kang-Chul. He delivers the Corkscrew Punch countless times but is still defeated by Mah Kang-Chul. After the match he apologizes to Shin Jin-Ho for breaking their promise not to lose until they fought each other.
Pi Ho-Chul (boxing): The weakest of the Four Heavenly Dragons. A good-looking womanizer, he crosses paths with Woo-Soo Choi when he tries to hit on Yu-Na and Do Do-He. He is Japan's young amateur representative, and later, when he becomes a pro boxer, he runs for nomination for Rookie of the Year. Pi Ho-Chul is the first of the Four Dragons to face Woo-Soo Choi. Someone challenges him to a boxing match, thinking Woo-Soo Choi has the boxing abilities of the real Kang Too-Jee; Pi Ho-Chul loses the match despite cheating. One year later he becomes a pro boxer and is easily defeated by Jegal Mih-Yang during the mainland hunt.
Jee Dae-Woong (judo): The second weakest of the Four Heavenly Dragons. He is a very tall, strong man and the captain of the judo club at Goo-Ryong High School. He is defeated by Han Sang-Jin during Han's revenge on the Four Dragons for destroying his Tae Kwon Do master. During the correctional institute tournament, he is quickly defeated by Kwon Shin's palm-twisting attack.
Dokko Dae-San (or Dokgo Dae-San) (wrestling): The one who defeated Han Sang-Jin's Tae Kwon Do master. He is the captain of the wrestling club and has great technique and strength (capable of bench pressing 300 kg). He never participates in a fight unless he has a 100% chance of winning. He defeats Han Sang-Jin in less than three minutes when the latter attempts to avenge his master. After Woo-Soo Choi's special underwater training, Woo-Soo fights Dokko Dae-San and wins. A year later, during the mainland hunt, Dokko Dae-San fights Jang-Suk Chung and loses badly; he ends up in the hospital with many bones broken, including his chin.
Han Sang-Jin (Tae Kwon Do): A former member of the Four Heavenly Dragons. He joined only as part of a plan to exact revenge on the Four Dragons for what they did to his taekwondo instructor, Ryu Nam-Jin. A natural-born talent, Han Sang-Jin prefers real-life training and frequently participates in fights. His initial goal is to defeat the man who beat his instructor. He defeats Woo-Soo Choi, Jee Dae-Woong, and Pi Ho-Chul, but loses to Dokko Dae-San in under three minutes.
Better in Use: Several key limitations of Qwen2 in use have been eliminated, including a longer maximum generation length (from 2K tokens to 8K tokens), better support for structured input and output (e.g., tables and JSON), and easier tool use. In addition, Qwen2.5-Turbo supports a context length of up to 1 million tokens.

Section 2 Architecture & Tokenizer

The Qwen2.5 series comprises open-source dense models, namely Qwen2.5-0.5B / 1.5B / 3B / 7B / 14B / 32B / 72B, and MoE models offered as an API service, namely Qwen2.5-Turbo and Qwen2.5-Plus. Below, we provide details about the architecture of the models. For dense models, we maintain the Transformer-based decoder architecture used in Qwen2. The architecture incorporates several key components: Grouped Query Attention (GQA) for efficient KV cache utilization, the SwiGLU activation function for non-linear activation, Rotary Positional Embeddings (RoPE) for encoding position information, QKV bias in the attention mechanism, and RMSNorm with pre-normalization to ensure stable training. Building upon the dense model architectures, we extend them to MoE architectures. This is achieved by replacing standard feed-forward network (FFN) layers with specialized MoE layers, where each layer comprises multiple FFN experts and a routing mechanism that dispatches tokens to the top-K experts. Following the approaches demonstrated in Qwen1.5-MoE, we implement fine-grained expert segmentation and shared-expert routing; a sketch of such a layer is given at the end of this section. These architectural innovations have yielded substantial improvements in model performance across downstream tasks. For tokenization, we utilize Qwen's tokenizer, which implements byte-level byte-pair encoding (BBPE) with a vocabulary of 151,643 regular tokens. We have expanded the set of control tokens from 3 to 22 compared to previous Qwen versions, adding two new tokens for tool functionality and allocating the remainder for other model capabilities. This expansion establishes a unified vocabulary across all Qwen2.5 models, enhancing consistency and reducing potential compatibility issues.
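To make the routing concrete, the following is a minimal sketch of an MoE layer with routed experts selected per token and always-on shared experts. The expert counts, hidden sizes, and top-K value are illustrative placeholders rather than the production configuration, and a plain two-layer FFN stands in for the SwiGLU block:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MoELayer(nn.Module):
        """Illustrative MoE layer: top-K routed experts plus shared experts."""
        def __init__(self, d_model=1024, d_ff=512, n_experts=64, n_shared=4, top_k=8):
            super().__init__()
            self.top_k = top_k
            self.router = nn.Linear(d_model, n_experts, bias=False)
            ffn = lambda: nn.Sequential(
                nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            self.experts = nn.ModuleList(ffn() for _ in range(n_experts))
            self.shared = nn.ModuleList(ffn() for _ in range(n_shared))

        def forward(self, x):                          # x: (tokens, d_model)
            weights = F.softmax(self.router(x), dim=-1)
            w, idx = weights.topk(self.top_k, dim=-1)  # dispatch to top-K experts
            out = sum(e(x) for e in self.shared)       # shared experts see every token
            for k in range(self.top_k):
                for e_id in idx[:, k].unique().tolist():
                    mask = idx[:, k] == e_id
                    out[mask] += w[mask, k, None] * self.experts[e_id](x[mask])
            return out

    layer = MoELayer()
    y = layer(torch.randn(16, 1024))   # 16 token vectors in, 16 out

Real implementations typically renormalize the router weights over the selected experts and add an auxiliary load-balancing loss to keep expert utilization even; both are omitted here for brevity.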
Section 3 Pre-training

Our language model pre-training process consists of several key components. First, we carefully curate high-quality training data through sophisticated filtering and scoring mechanisms, combined with strategic data mixture. Second, we conduct extensive research on hyperparameter optimization to effectively train models at various scales. Finally, we incorporate specialized long-context pre-training to enhance the model's ability to process and understand extended sequences. Below, we detail our approaches to data preparation, hyperparameter selection, and long-context training.

Subsection 3.1 Pre-training Data

Qwen2.5 demonstrates significant enhancements in pre-training data quality compared to its predecessor Qwen2. These improvements stem from several key aspects:

1. Better data filtering. High-quality pre-training data is crucial for model performance, making data quality assessment and filtering a critical component of our pipeline. We leverage Qwen2-Instruct models as data quality filters that perform comprehensive, multi-dimensional analysis to evaluate and score training samples. The filtering method represents a significant advancement over the approach used for Qwen2, as it benefits from Qwen2's expanded pre-training on a larger multilingual corpus. The enhanced capabilities enable more nuanced quality assessment, resulting in both improved retention of high-quality training data and more effective filtering of low-quality samples across multiple languages.

2. Better math and code data. During the pre-training phase of Qwen2.5, we incorporate training data from Qwen2.5-Math and Qwen2.5-Coder. This data integration strategy proves highly effective, as these specialized datasets are instrumental in achieving state-of-the-art performance on mathematical and coding tasks. By leveraging these high-quality domain-specific datasets during pre-training, Qwen2.5 inherits strong capabilities in both mathematical reasoning and code generation.

3. Better synthetic data. To generate high-quality synthetic data, particularly in mathematics, code, and knowledge domains, we leverage both Qwen2-72B-Instruct and Qwen2-Math-72B-Instruct. The quality of this synthesized data is further enhanced through rigorous filtering using our proprietary general reward model and the specialized Qwen2-Math-RM-72B model.

4. Better data mixture. To optimize the pre-training data distribution, we employ Qwen2-Instruct models to classify and balance content across different domains. Our analysis revealed that domains like e-commerce, social media, and entertainment are significantly overrepresented in web-scale data, often containing repetitive, template-based, or machine-generated content. Conversely, domains such as technology, science, and academic research, while containing higher-quality information, are traditionally underrepresented. Through strategic down-sampling of overrepresented domains and up-sampling of high-value domains, we ensure a more balanced and information-rich training dataset that better serves our model's learning objectives.

Building on these techniques, we have developed a larger and higher-quality pre-training dataset, expanding from the 7 trillion tokens used in Qwen2 to 18 trillion tokens.

Subsection 3.2 Scaling Law for Hyper-parameters

We develop scaling laws for hyper-parameters based on the pre-training data of Qwen2.5. While previous studies primarily used scaling laws to determine optimal model sizes given compute budgets, we leverage them to identify optimal hyperparameters across model architectures. Specifically, our scaling laws help determine key training parameters like batch size B and learning rate mu for both dense models and MoE models of varying sizes. Through extensive experimentation, we systematically study the relationship between model architecture and optimal training hyper-parameters. In particular, we analyze how the optimal learning rate and batch size vary with model size N and pre-training data size D. Our experiments cover a comprehensive range of architectures, including dense models with 44M to 14B parameters and MoE models with 44M to 1B activated parameters, trained on datasets ranging from 0.8B to 600B tokens. Using these optimal hyper-parameter predictions, we then model the final loss as a function of model architecture and training data scale. Additionally, we leverage scaling laws to predict and compare the performance of MoE models with varying parameter counts against their dense counterparts. This analysis guides our hyper-parameter configuration for MoE models, enabling us to achieve performance parity with specific dense model variants (such as Qwen2.5-72B and Qwen2.5-14B) through careful tuning of both activated and total parameters.
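The paper does not state the functional form of these fits; a common assumption, used purely for illustration here, is a power law mu_opt ≈ c · N^a · D^b fitted by least squares in log space. The measurements below are invented placeholders, not values from the paper:

    import numpy as np

    # Hypothetical (N, D, best-learning-rate) points from small-scale sweeps.
    N  = np.array([4.4e7, 3.0e8, 1.5e9, 7.0e9, 1.4e10])     # parameters
    D  = np.array([8.0e8, 1.0e10, 6.0e10, 3.0e11, 6.0e11])  # training tokens
    mu = np.array([1.2e-3, 6.1e-4, 3.3e-4, 1.6e-4, 1.1e-4])

    # Fit log(mu) = log(c) + a*log(N) + b*log(D) by ordinary least squares.
    X = np.column_stack([np.ones_like(N), np.log(N), np.log(D)])
    coef, *_ = np.linalg.lstsq(X, np.log(mu), rcond=None)
    log_c, a, b = coef

    # Extrapolate to a large target run, e.g. ~72B parameters on 18T tokens.
    mu_opt = np.exp(log_c) * (7.2e10 ** a) * (1.8e13 ** b)
    print(f"predicted optimal learning rate: {mu_opt:.2e}")

An analogous fit over batch size, and a joint fit of the final loss against (N, D), would mirror the procedure described above.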
Subsection 3.3 Long-context Pre-training

For optimal training efficiency, Qwen2.5 employs a two-phase pre-training approach: an initial phase with a 4,096-token context length, followed by an extension phase for longer sequences. Following the strategy used in Qwen2, we extend the context length from 4,096 to 32,768 tokens during the final pre-training stage for all model variants except Qwen2.5-Turbo. Concurrently, we increase the base frequency of RoPE from 10,000 to 1,000,000 using the ABF technique. For Qwen2.5-Turbo, we implement a progressive context length expansion strategy during training, advancing through four stages: 32,768 tokens, 65,536 tokens, 131,072 tokens, and ultimately 262,144 tokens, with a RoPE base frequency of 10,000,000. At each stage, we carefully curate the training data to include 40% sequences at the current maximum length and 60% shorter sequences. This progressive training methodology enables smooth adaptation to increasing context lengths while maintaining the model's ability to effectively process and generalize across sequences of varying lengths. To enhance our models' ability to process longer sequences during inference, we implement two key strategies: YARN and Dual Chunk Attention (DCA). Through these innovations, we achieve a four-fold increase in sequence length capacity, enabling Qwen2.5-Turbo to handle up to 1 million tokens and other models to process up to 131,072 tokens. Notably, these approaches not only improve the modeling of long sequences by reducing perplexity but also maintain the models' strong performance on shorter sequences, ensuring consistent quality across varying input lengths.
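The effect of raising the RoPE base frequency can be seen directly from the rotation angles: a larger base slows every frequency, so distant positions remain distinguishable. A small sketch, with dimension and positions chosen arbitrarily for illustration:

    import numpy as np

    def rope_angles(positions, dim=128, base=10_000.0):
        # Pair i of a RoPE head rotates at frequency base**(-2i/dim); raising
        # the base (the ABF idea) slows the rotation of every pair.
        inv_freq = base ** (-np.arange(0, dim, 2) / dim)
        return np.outer(positions, inv_freq)   # (n_positions, dim/2) radians

    pos = np.array([1_024, 32_768, 262_144])
    before = rope_angles(pos, base=10_000.0)      # original base
    after  = rope_angles(pos, base=1_000_000.0)   # raised base
    # The slowest-rotating pair (last column) accumulates far less phase with
    # the raised base, keeping positions deep into a long context separable.
    print(before[:, -1], after[:, -1])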
Section 4 Post-training

Qwen2.5 introduces two significant advancements in its post-training design compared to Qwen2:

1. Expanded Supervised Fine-tuning Data Coverage: The supervised fine-tuning process leverages a massive dataset comprising millions of high-quality examples. This expansion specifically addresses key areas where the previous model showed limitations, such as long-sequence generation, mathematical problem-solving, coding, instruction-following, structured data understanding, logical reasoning, cross-lingual transfer, and robust system instruction.

2. Two-stage Reinforcement Learning: The reinforcement learning (RL) process in Qwen2.5 is divided into two distinct stages: Offline RL and Online RL. Offline RL: This stage focuses on developing capabilities that are challenging for the reward model to evaluate, such as reasoning, factuality, and instruction-following. Through meticulous construction and validation of training data, we ensure that the Offline RL signals are both learnable and reliable, enabling the model to acquire these complex skills effectively. Online RL: The Online RL phase leverages the reward model's ability to detect nuances in output quality, including truthfulness, helpfulness, conciseness, relevance, harmlessness, and debiasing. It enables the model to generate responses that are precise, coherent, and well-structured while maintaining safety and readability. As a result, the model's outputs consistently meet human quality standards and expectations.

Subsection 4.1 Supervised Fine-tuning

In this section, we detail the key enhancements made during the SFT phase of Qwen2.5, focusing on several critical areas:

1. Long-sequence Generation: Qwen2.5 is capable of generating high-quality content with an output context length of up to 8,192 tokens, a significant advancement over the typical post-training response length, which often remains under 2,000 tokens. To address this gap, we develop long-response datasets. We employ back-translation techniques to generate queries for long-text data from pre-training corpora, impose output length constraints, and use Qwen2 to filter out low-quality paired data.

2. Mathematics: We introduce the chain-of-thought data of Qwen2.5-Math, which encompasses a diverse range of query sources, including public datasets, K-12 problem collections, and synthetic problems. To ensure high-quality reasoning, we employ rejection sampling along with reward modeling and annotated answers for guidance, producing step-by-step reasoning processes.

3. Coding: To enhance coding capabilities, we incorporate the instruction tuning data of Qwen2.5-Coder. We organize multiple language-specific agents into a collaborative framework, generating diverse and high-quality instruction pairs across nearly 40 programming languages. We expand our instruction dataset by synthesizing new examples from code-related Q&A websites and gathering algorithmic code snippets from GitHub. A comprehensive multilingual sandbox is used to perform static code checking and validate code snippets through automated unit testing, ensuring code quality and correctness.

4. Instruction-following: To ensure high-quality instruction-following data, we implement a rigorous code-based validation framework. In this approach, LLMs generate both instructions and corresponding verification code, along with comprehensive unit tests for cross-validation. Through execution feedback-based rejection sampling, we carefully curate the training data used for Supervised Fine-Tuning, thereby guaranteeing the model's faithful adherence to intended instructions.

5. Structured Data Understanding: We develop a comprehensive structured understanding dataset that encompasses both traditional tasks, such as tabular question-answering, fact verification, error correction, and structural understanding, and complex tasks involving structured and semi-structured data. By incorporating reasoning chains into the model's responses, we significantly enhance its ability to infer information from structured data, thereby improving its performance across these diverse tasks. This approach not only broadens the scope of the dataset but also deepens the model's capacity to reason and derive meaningful insights from complex data structures.

6. Logical Reasoning: To enhance the model's logical reasoning capabilities, we introduce a diverse set of 70,000 new queries spanning various domains. These queries encompass multiple-choice questions, true/false questions, and open-ended questions. The model is trained to approach problems systematically, employing a range of reasoning methods such as deductive reasoning, inductive generalization, analogical reasoning, causal reasoning, and statistical reasoning. Through iterative refinement, we systematically filter out data containing incorrect answers or flawed reasoning processes. This process progressively strengthens the model's ability to reason logically and accurately, ensuring robust performance across different types of reasoning tasks.
7. Cross-Lingual Transfer: To facilitate the transfer of the model's general capabilities across languages, we employ a translation model to convert instructions from high-resource languages into various low-resource languages, thereby generating corresponding response candidates. To ensure the accuracy and consistency of these responses, we evaluate the semantic alignment between each multilingual response and its original counterpart. This process preserves the logical structure and stylistic nuances of the original responses, thereby maintaining their integrity and coherence across different languages.

8. Robust System Instruction: We construct hundreds of general system prompts to improve the diversity of system prompts in post-training, ensuring consistency between system prompts and conversations. Evaluations with different system prompts show that the model maintains good performance and reduced variance, indicating improved robustness.

9. Response Filtering: To evaluate the quality of responses, we employ multiple automatic annotation methods, including a dedicated critic model and a multi-agent collaborative scoring system. Responses are subjected to rigorous assessment, and only those deemed flawless by all scoring systems are retained. This comprehensive approach ensures that our outputs maintain the highest quality standards.

Ultimately, we construct a dataset of over 1 million SFT examples. The model is fine-tuned for two epochs with a sequence length of 32,768 tokens. To optimize learning, the learning rate is gradually decreased from 7 × 10^-6 to 7 × 10^-7. To address overfitting, we apply a weight decay of 0.1, and gradient norms are clipped at a maximum value of 1.0.

Subsection 4.2 Offline Reinforcement Learning

Compared to Online Reinforcement Learning (RL), Offline RL enables the pre-preparation of training signals, which is particularly advantageous for tasks where standard answers exist but are challenging to evaluate using reward models. In this study, we focus on objective query domains such as mathematics, coding, instruction following, and logical reasoning, where obtaining accurate evaluations can be complex. In the previous phase, we extensively employed strategies like execution feedback and answer matching to ensure the quality of responses. For the current phase, we reuse that pipeline, employing the SFT model to resample responses for a new set of queries. Responses that pass our quality checks are used as positive examples, while those that fail are treated as negative examples for Direct Preference Optimization (DPO) training. To further enhance the reliability and accuracy of the training signals, we make use of both human and automated review processes.
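For concreteness, the standard DPO objective used in such an offline stage can be sketched as follows; the sequence log-probabilities below are random placeholders standing in for policy and frozen-reference scores of the positive and negative responses:

    import torch
    import torch.nn.functional as F

    def dpo_loss(pi_pos, pi_neg, ref_pos, ref_neg, beta=0.1):
        # DPO: widen the policy's log-prob margin between the accepted
        # (positive) and rejected (negative) response beyond the margin
        # assigned by the frozen reference model.
        margin = (pi_pos - pi_neg) - (ref_pos - ref_neg)
        return -F.logsigmoid(beta * margin).mean()

    # Placeholder per-sequence log-probabilities for a batch of 4 pairs.
    torch.manual_seed(0)
    pi_pos, pi_neg, ref_pos, ref_neg = (torch.randn(4) for _ in range(4))
    print(float(dpo_loss(pi_pos, pi_neg, ref_pos, ref_neg)))

Here beta controls how far the policy may drift from the reference; 0.1 is an arbitrary placeholder rather than the value used in training.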
It was spring. The sun was shining and the birds were chirping. I was spending time with my older brother, Damon, in the backyard near the woods. Mom was somewhere in the house and Dad was on duty as a policeman. Damon and I usually played in the woods; nothing bad had ever happened to us there. We often climbed into the treehouse Dad had built for us when we were younger and just spent time up there. I'd always look out for animals and unusual things in the woods. One time I found a shoe sitting there. It was bright red, the laces slightly worn, with a bit of dried mud encrusted around the sole. I was so excited that I went over, picked it up, and took it into the treehouse. I treated that shoe as though it were a prize. Damon, on the other hand, would just sit with his sketchbook and pencil and draw for a long time. Most of his sketches were of birds and the occasional fox. He was good. Every now and again we'd get into little debates, generally about nothing. Even though our thoughts were sometimes very different, we were the best of friends. I could trust him with anything.

Afterward, Damon spent the rest of the time sketching to relieve some stress that had been on his mind. Mom wanted to ask questions about it, but she kept quiet; she could see Dad was very tired after hours of searching and having nothing to show for it. Dad was always determined to do anything he could in missing-person cases.

Later that night, Damon and I went upstairs to our room and got ready for bed. I climbed into my bunk and tried to sleep. I slept for a couple of hours, only to be woken by a loud sound. I sat up and looked at the clock: it was 1:34 a.m. I tried to focus on what I was hearing; to me it sounded like something dragging, with twigs snapping, as if someone were walking back in the woods. I tried to reassure myself that it was just a wolf dragging a fawn, but something was off—wolves don't make sounds that loud. After a few minutes the dragging faded into the distance, but no matter how hard I tried, I couldn't get back to sleep.

Morning rolled around, and I still hadn't slept because of what I'd heard. It was a good thing it was a four-day weekend. When Damon woke up, I asked if he'd heard the noise. He looked at me and said, "I guess not." He shrugged and got ready for the day. As soon as we were dressed, we went downstairs and found Dad getting ready to head out. He grabbed his keys and his jacket from the wall hooks. Before he left the house, he hugged both Damon and me and kissed Mom.

When Damon and I got out to the woods, he headed right up into the treehouse. I decided to stay down for a minute. He popped his head out the window and said, "Alright, just don't get hurt," then began sketching. I walked around more cautiously because of the sounds; it was still dark under the trees. I went toward a small clearing not far from the treehouse. There was a fallen tree that had been there long before we moved into this house; it was covered in moss and a couple of vines. Damon could usually sit on the log and get close to the birds, sometimes even close enough to touch them. I was looking a little deeper, past the clearing, when I saw something odd. I jumped down a short ledge and went over to get a better look. It was a scrap of cloth, light blue with dark red on part of it. I wanted to keep it secret, but I knew I should probably tell Dad about it.
"Mama, mama, mama," the noisy little bundle of joy kept calling. The mother and son were on their way to the administration office; the boy had just turned five and needed to be registered as her son and as a citizen. They arrived at the capital with the child chewing blackberries, his face stained red. She bent down and wiped the juice from his hands and mouth on the inside of her pink skirt. "Yue-er, why must you be so messy? Didn't I teach you to eat without making a mess?" she scolded. "You did," Xinxi replied in a childish voice. "Then why are you still so messy?" "Mama, it's not me!" "Then is it the fruit's fault?" she asked. Xinxi frowned, trying to process the complicated question. After thinking hard for a moment, encouraged by his mother's patience, he said, "I'll eat one fruit at a time." "Ho ho, the young master understands," she said approvingly. Nodding, she took his hand and led him into the administration office. After paying the fees and successfully registering Xinxi, the two of them stepped out of the building after two hours and forty-five minutes. As they walked down the road, something caught her eye. She tightened her grip on Xinxi's hand and turned to see what it was. He looked up at his mother, about to ask what was wrong, when he was picked up and they entered a shop. Holding him tightly against her chest, she hid to the side and looked out the window. When he turned his head toward it, he saw five men on white horses. Four wore black robes with characters embroidered in red silk on their backs; the fifth, a young man, wore a lilac robe and gold jewels in his hair and ears. Once they were out of sight, Yue Mian's grip on her son softened. He reached up, placed his hand on her cheek, and asked, "Mama, what's wrong?" Her gaze softened as she turned to him, and she saw she had startled the customers and workers in the store. Bowing slightly to them, the pair slipped out, took a few back alleys, and entered the forest. Taking the back streets to avoid prying eyes had lengthened the route home. She picked Xinxi up and put him on her back so as not to strain his little legs, and the road home was silent. Xinxi wanted to ask what had frightened her, but he was scared himself; he rested his head against her back and closed his eyes. Yue Mian's mind wandered as she gave a bitter smile at her own foolishness—she had been so absorbed in a carefree life that she had forgotten her past. She would have forgotten it if she hadn't seen that familiar face—the one who had pledged his life to her—and suddenly remembered the promise to her mother. She had vowed to send letters whenever she could, but those letters were never written; the memories she cherished had been eclipsed by the one she loved, Ye Xinxi Yue. Looking back at the bundle of joy asleep on her back, she couldn't help but smile—he truly lived up to his name. She was almost terrified that Xinxi had made her forget that he was, in truth, her heart's joy. When they arrived home, she placed him on the bed. Night fell, and Xinxi was sent to sleep with the others. Yue Mian sat at the table under candlelight with five sheets of paper, an inkstone, and a brush. Dipping the brush in ink, she moved it over the first sheet and began to write. Aunty Fei stood by, ready to blot the ink with a sheet at her side. She had known this day would come, but had not expected it so soon. Aunty Fei was fond of Yue Mian and often said that if she could give birth she would want a daughter like her. 
The sad part was that Yue Mian would be leaving her joy behind and becoming the person she had been five years ago; this pained her deeply. Aunty Fei asked, "YuMi, will Yue-er lose his mother once more?" The brush paused in her hand, then started once more. Answering her question, she said, "When I do leave, I will be leaving my son here. It would be troublesome to have someone with me who will hold me back, and Xinxi is my weakness. No one is to know that he exists, because if I must lose him, I will break. My Xinxi knows that he has a mother, and I will definitely keep myself alive until the day he marries the one he loves," her voice calm and determined. "Then I will do my utmost and accept my responsibility. I will look after Her Highness's joy and raise him to be a fine man, able to stand by your side with pride," Fei Rou pledged.
"Mama, mama, mama," the noisy little bundle of joy kept calling. The mother and son were on their way to the administration office; the boy had just turned five and needed to be registered as her son and as a citizen. They arrived at the capital with the child chewing blackberries, his face stained red. She bent down and wiped the juice from his hands and mouth on the inside of her pink skirt. "Yue-er, why must you be so messy? Didn't I teach you to eat without making a mess?" she scolded. "You did," Xinxi replied in a childish voice. "Then why are you still so messy?" "Mama, it's not me!" "Then is it the fruit's fault?" she asked. Xinxi frowned, trying to process the complicated question. After thinking hard for a moment, encouraged by his mother's patience, he said, "I'll eat one fruit at a time." "Ho ho, the young master understands," she said approvingly. Nodding, she took his hand and led him into the administration office. After paying the fees and successfully registering Xinxi, the two of them stepped out of the building after two hours and forty-five minutes. As they walked down the road, something caught her eye. She tightened her grip on Xinxi's hand and turned to see what it was. He looked up at his mother, about to ask what was wrong, when he was picked up and they entered a shop. Holding him tightly against her chest, she hid to the side and looked out the window. When he turned his head toward it, he saw five men on white horses. Four wore black robes with characters embroidered in red silk on their backs; the fifth, a young man, wore a lilac robe and gold jewels in his hair and ears. Once they were out of sight, Yue Mian's grip on her son softened. He reached up, placed his hand on her cheek, and asked, "Mama, what's wrong?" Her gaze softened as she turned to him, and she saw she had startled the customers and workers in the store. Bowing slightly to them, the pair slipped out, took a few back alleys, and entered the forest. Taking the back streets to avoid prying eyes had lengthened the route home. She picked Xinxi up and put him on her back so as not to strain his little legs, and the road home was silent. Xinxi wanted to ask what had frightened her, but he was scared himself; he rested his head against her back and closed his eyes. Yue Mian's mind wandered as she gave a bitter smile at her own foolishness—she had been so absorbed in a carefree life that she had forgotten her past. She would have forgotten it if she hadn't seen that familiar face—the one who had pledged his life to her—and suddenly remembered the promise to her mother. She had vowed to send letters whenever she could, but those letters were never written; the memories she cherished had been eclipsed by the one she loved, Ye Xinxi Yue. Looking back at the bundle of joy asleep on her back, she couldn't help but smile—he truly lived up to his name. She was almost terrified that Xinxi had made her forget that he was, in truth, her heart's joy. When they arrived home, she placed him on the bed. Night fell, and Xinxi was sent to sleep with the others. Yue Mian sat at the table under candlelight with five sheets of paper, an inkstone, and a brush. Dipping the brush in ink, she moved it over the first sheet and began to write. Aunty Fei stood by, ready to blot the ink with a sheet at her side. She had known this day would come, but had not expected it so soon. Aunty Fei was fond of Yue Mian and often said that if she could give birth she would want a daughter like her. 
The sad part was that Yue Mian would be leaving her joy behind and becoming the person she had been five years ago; this pained her deeply. Aunty Fei asked, "YuMi, will Yue-er lose his mother once more?" The brush paused in her hand, then started once more. Answering her question, she said, "When I do leave, I will be leaving my son here. It would be troublesome to have someone with me who will hold me back, and Xinxi is my weakness. No one is to know that he exists, because if I must lose him, I will break. My Xinxi knows that he has a mother, and I will definitely keep myself alive until the day he marries the one he loves," her voice calm and determined. "Then I will do my utmost and accept my responsibility. I will look after Her Highness's joy and raise him to be a fine man, able to stand by your side with pride," Fei Rou pledged.
Science Week Ireland is an annual, week-long event held each November that celebrates science in everyday life. It is an initiative of Science Foundation Ireland (SFI) and is the largest science festival in the country, engaging tens of thousands of people in workshops, science shows, talks, laboratory demonstrations, science walks, and other science-related events. Science Week is a collaboration involving industry, colleges, schools, libraries, teachers, researchers, and students throughout Ireland. It supports Science Foundation Ireland’s mission to catalyse, inspire, and guide the best in science, technology, engineering, and maths (STEM) education and public engagement. The ultimate aim of this effort is for Ireland to have the most engaged and scientifically informed public by 2020, as outlined in Science Foundation Ireland’s strategy, Agenda 2020. This also aligns with the national science innovation strategy, Innovation 2020. History: Over the years, Science Week Ireland has grown from a small pilot initiative to a large promotional and event engine and is now a recognised vehicle for regional activity supported by a national promotional campaign. In 1995, a National Science Week was organised by the Royal Dublin Society and a number of other organisations to inform the general public about science. The first Science Week organised by Forfás was held in 1996. It was run by Forfás on behalf of the Office of Science and Technology at the Department of Jobs, Enterprise and Innovation under the name 'Information Technology and Science Week.' The week aimed to raise general awareness of the benefits of science and information technology to people of all ages. Professional bodies, voluntary groups, colleges, businesses and the public sector combined to organise events countrywide, including conferences, lectures, interactive exhibitions, debates and competitions for primary school students. That first week began on 25 November 1996. In 1997 the event was renamed Science Week and ran from 10 to 16 November; it was launched by Minister Noel Treacy in Galway. That year about 50 events were held nationwide, including answering scientific questions for schoolchildren and a Speakathon organised by the Irish Research Scientists' Association. Science Week 1998 ran from 1 to 8 November and featured talks in public libraries and another Speakathon. SFI took over Science Week from the Forfás Discover Science and Engineering programme in 2012. Science Week has since continued to grow into a week-long celebration of STEM public engagement, enhancing public interest in STEM and helping people see its relevance to their daily lives. Forfás sought feedback on the running of Science Week and it was also externally evaluated. Science Week 2007 took place from 11 to 18 November, and the theme was 'Surrounded by Science'. The programme illustrated that behind the everyday objects in our lives is a great inventor, scientist or engineer. Lectures featured Craig Johnston, inventor of the Adidas Predator; Joe F. Edwards Jr., former NASA astronaut; and Dr Sheila Willis, Director of the Forensic Science Laboratory. 2007 was the eleventh year of Science Week and saw an estimated 95,000 people attend lectures, exhibitions and workshops throughout the country. The 2008 Science Week took place from 11 to 16 November. The theme was 'Science – Shaping Our World', celebrating the International Year of Planet Earth.
Guest lecturers included Professor Aubrey Manning, distinguished zoologist and broadcaster; Gerry Johnston, director of Special Effects Ireland; Dr Cynthia Breazeal, Associate Professor at the Massachusetts Institute of Technology; Stephen Attenborough of Virgin Galactic; and Patrick Collison, Irish Young Scientist of the Year 2005. These talks can be viewed on YouTube. Science Week 2009 took place from 8 to 15 November. The theme was 'Science – Inspiring Creativity and Innovation', linking to the European Year of Creativity and Innovation. In the summer of 2009, DSE launched a Twitter account for the latest news on Science Week. A lecture series included speakers from the Tyndall National Institute, Cork, and Sustainable Energy Ireland; these can be viewed on YouTube. Science Week 2010 ran from 7 to 14 November. Its theme was "Our Place in Space", which explored the latest happenings in astronomy, Ireland's role in the space industry, and the vital role played by science, technology, engineering and mathematics (STEM) in helping us make sense of our universe. Science Week 2011 ran from 13 to 20 November. The theme was "The Chemistry of Life", demonstrating the importance of chemistry to our everyday lives, from the atoms that are the building blocks of nature to the chemistry that creates lasting bonds between people. Science Week 2012 ran from 11 to 18 November. The theme was "Everyday Experimenting", highlighting how we are all involved in science every day, carrying out scientific processes and experimenting even when not aware of it. Science Week 2013 ran from 10 to 17 November. The theme was "Exploring the XTRA-Ordinary", which called on the public to go behind the scenes of everyday life and explore the extraordinary processes taking place in front of our eyes. In 2014 an estimated 250,000 people took part in science festivals, demonstrations, seminars and tours across the country during the 19th annual national Science Week, which took place from 9 to 16 November. The theme was 'Power of Science.' Over 800 events took place across Ireland, including science festivals in Sligo, Galway, Mayo, Dublin, Cork, Waterford and the Midlands, aiming to "entertain, educate and enthral young and old alike with the power of science." Jamie Heaslip acted as a Science Week ambassador. 2015 marked the 20th anniversary of Science Week, which took place from 8 to 15 November. The theme was 'Science Week 2.0: Design Your Future.' It celebrated how science empowers people to 'Design Your Future.' Numerous events were held in every county, and regional festivals took place in Mayo, Sligo, Galway, Waterford, Cork, Limerick and the Midlands. Science Week 2016 took place from 13 to 20 November. The theme was 'Science Rising,' which looked at how science is key to our success — part of our past, an important part of our present, and with endless potential still to be realised.
The C40 Cities Climate Leadership Group’s Climate Positive Development Program was launched in May 2009 in partnership with the Clinton Climate Initiative and the U.S. Green Building Council. The program brings together district-scale new-build and regeneration projects working to achieve "Climate Positive" (net carbon-negative) outcomes in cities worldwide. As part of C40’s Sustainable Communities Initiative, it aims to create a model for large-scale urban communities and to support projects that serve as urban laboratories for cities seeking to grow in ways that are environmentally sustainable, climate resilient, and economically viable. Climate Positive is an exclusive program with a competitive application process and currently comprises 17 global projects that will collectively reduce the emissions impact of more than one million people. The cities hosting these projects support local implementation and share best practices globally through the C40 Climate Positive Network. The projects are in different stages of development but share key characteristics such as high density, highly efficient buildings, mixed-use zoning, and transit accessibility. Climate Positive was developed by the Green Building Council and launched at the C40 Cities Climate Leadership Group summit in Seoul, South Korea, in May 2009. At launch, Climate Positive had 16 founding projects on six continents, supported by local governments and property developers. In October 2012, Copenhagen’s Nordhavn project joined the program, and in São Paulo, Odebrecht’s Parque da Cidade (Park of the City) formally launched with a large kickoff event, bringing the total number of projects to 17. The current projects are located in Melbourne and Sydney, Australia; Palhoça and São Paulo, Brazil; Toronto, Ontario, Canada; Victoria, British Columbia, Canada; Copenhagen, Denmark; Ahmedabad and Jaipur, India; Pretoria, South Africa; Seoul, South Korea; Stockholm, Sweden; London, the United Kingdom; and Oberlin, Portland, and San Francisco, the United States. Objectives: With the primary objective of building Climate Positive (operational net carbon-negative) districts in cities, the Climate Positive Development Program attempts to change the paradigm of district-scale development through three main activities: recognizing exemplary achievement; sharing best practices and challenges experienced among development partners; and facilitating the broader implementation in cities of scalable projects, policies, and programs with low carbon emissions. Leadership: In April 2013 it was announced that the Mayor of São Paulo, Fernando Haddad, and the Mayor of Stockholm, Sten Nordin, would share the chairmanship of Climate Positive and together lead the network, due to their leadership and commitment to finding replicable city-scale solutions to address climate change. How it Works: Each Climate Positive Development project has a unique profile determined by its distinct economic, political, and climate challenges; however, every project aims to lower its operational greenhouse gas emissions to below zero. Moreover, development partners across the program's projects are expected to focus on reducing operational carbon emissions at the district scale from the transportation, energy, and waste sectors, and are required to share the solutions they develop.
The program also provides technical and logistical support by hosting learning programs and webinars, convening private-sector firms to produce tools and templates for project use, increasing project visibility through various media channels, and granting access to technical experts and other partners within the Climate Positive and C40 networks. To become Climate Positive and achieve net carbon-negative outcomes, development partners earn Climate Positive Credits by sequestering emissions on-site and abating emissions in surrounding communities. There are many paths to the Climate Positive outcome of net-negative operational GHG emissions; each project will use a different set of strategies and technologies according to local opportunities, guided by the Climate Positive Development Framework, which outlines the four stages of becoming Climate Positive. As projects move through the four recognition stages — Climate Positive Candidate, Climate Positive Participant, Progress Site, and, at completion, Climate Positive certification — development partners submit documentation to the program to ensure they remain on track and receive feedback from program staff and affiliated technical experts.
Projects:
- Victoria Harbour, Melbourne, Australia
- Barangaroo, Sydney, Australia
- Parque da Cidade, São Paulo, Brazil
- Pedra Branca Sustainable Urbanism, Palhoça, Greater Florianópolis, Brazil
- Dockside Green, Victoria, BC, Canada
- Waterfront Toronto (Lower Don Lands), Toronto, ON, Canada
- Nordhavn, Copenhagen, Denmark
- ProjectZero, Sonderborg, Denmark
- Godrej Garden City, Ahmedabad, India
- Mahindra World City, Jaipur, India
- Menlyn Maine, Pretoria, South Africa
- Magok Urban Development Project, Seoul, South Korea
- Stockholm Royal Seaport, Stockholm, Sweden
- Elephant & Castle, London, UK
- Treasure Island Development Project, San Francisco, CA, USA
- The Oberlin Project, Oberlin, OH, USA
- South Waterfront EcoDistricts, Portland, OR, USA
- The Shinagawa Project, Tokyo, Japan
See also:
- C40 Cities Climate Leadership Group
- Adaptation to global warming
- Climate change mitigation
- Covenant of Mayors
- Energy conservation
- Global Energy Basel
- ICLEI - Local Governments for Sustainability
- Individual and political action on climate change
- London Climate Change Agency
- PlaNYC
- Renewable energy
- World energy resources and consumption
- World's largest cities
External links:
- climatepositivedevelopment.org
- Cities Go Climate Positive
- C40 Cities official website
- Clinton Climate Initiative
- Environmental organizations based in the United States
One day I was watching DuckTales with Daddy in the evening and he suddenly started laughing hysterically for no reason. "Doesn't your mother laugh like that?" he said, pointing at Donald Duck. "Mommy doesn't laugh like that." "Oh yes, she doesn't. I love your mother very much," he said suddenly to me, "but I have to say her laughter sounds closer to a platypus than a duck." And he started laughing again. I didn't know what a platypus was. I only knew about Tom and Jerry and Uncle Scrooge. "What's a pootipus, Daddy?" I asked, and he broke into a fit of laughter. "Oh my God, what is she feeding this kid? It's called platypus, sweetie, not pootipus." I didn't know what came over him. Maybe he realized I didn't know what a platypus was, so he did an impression of one. He pressed his hands to his lips and made duck-like "quacks" during his performance. It wasn't funny. I thought a platypus looked like a cat with a duck's beak. The thought frightened me. Daddy had to promise he'd take me to the zoo to show me what a platypus looked like to stop my screams. "Mommy doesn't laugh like that," I told him and kicked the floor in anger. "Daddy is a liar. First he said I taste like salt and dirt. Then he said I'm still three, not four years old. But I believe Mommy because she is the smartest—just like Anna—and she doesn't laugh like a platypus. She laughs funny but she doesn't sound like a duck!" "That's horrible." "I'm going to tell Mommy!" I screamed and ran outside to the garden, where Mommy was relaxing in the sun. "Mommy!" I yelled, and she hurriedly stood up. "What happened, sweetie?" she said, and I told her everything. "Daddy said, 'You laugh like a platypus.' He even did an impression of one." I was angry at Daddy, but when I stopped talking to look at Mommy, I understood I had made a mistake. Mommy was angry. Poppy had told me never to make fun of Mommy's laugh, but I'd forgotten in my anger. "She gets really angry when someone does it," he had told me during one of my trips to his home in the States. Even Daddy knew how bad Mommy's anger was, so he stopped chasing me when he heard me tattling on him and ran back into the house. Mommy chased him. I ran after them both, screaming so loudly that someone called the police, thinking we were in trouble. One time my art teacher, Miss Jenny, had called Mommy to the school because I had refused to share my crayons with Stan Luther. I didn't like Stan. He was a mean boy. He used to push me from behind when others weren't watching. When Mommy asked why I wasn't sharing, I told her the reason, and she got so angry that even Miss Jenny started sweating. Stan never pushed me after that. We both laughed as we left school. It was the best day of my life. Miss Jenny never asked me to share again. All thanks to Mommy's anger. But this time I'd done something wrong. Poppy once told me, "Relations founded on a strong base don't break easily, but nobody likes a rat." I like Jerry, but I don't hate Tom the Cat. I felt like a child chasing them. Mommy didn't laugh that night, and Daddy didn't kiss me goodnight either.
LittleBigPlanet PS Vita is a puzzle-platform video game developed by Double Eleven, Tarsier Studios, and XDev Studio Europe for the PlayStation Vita handheld console. It is the fourth game in the LittleBigPlanet franchise, a series of puzzle-platformers centered on user-generated content. The game was announced in January 2011 alongside the reveal of the PlayStation Vita, then known as the Next Generation Portable (NGP). The first details were revealed on 6 June 2011 at the Electronic Entertainment Expo. It was released on 19 September 2012 in Europe, 20 September 2012 in Japan and Australia, and 25 September 2012 in North America. The online servers were permanently shut down in March 2021 after suffering attacks during the previous year. Gameplay: As in previous titles in the LittleBigPlanet series, players control a character named Sackboy through a variety of worlds, utilizing the character's abilities such as jumping and grabbing objects. The game also features non-platforming mini-games and numerous multiplayer options. In addition to up to four-player online competitive or cooperative play, the PlayStation Vita's multi-touch display can be used by two players for competitive games. Pass'n'Play is also available, enabling turn-based gameplay. The Vita's rear touch panel is used for pushing objects toward the player and creating platforms out of parts of the world. Content creation: Players can create their own levels and share them online via the PlayStation Network. The PlayStation Vita's touch-screen display can be used to draw objects and platforms directly in the game world. In addition to these unique creation tools, the game includes all tools available in LittleBigPlanet 2 (with the exception of DLC tools such as the Wormhole). However, materials and stickers/decorations from prior games are not available. Costumes purchased from the PlayStation Network to customise the player's character are transferable between the PlayStation 3 and PlayStation Vita versions, including LittleBigPlanet Karting, though costumes from the PSP game are not available in LittleBigPlanet PS Vita. Content can also be shared over both Wi-Fi and 3G networks. Additional content: The PlayStation Vita version includes tools from previous LittleBigPlanet games adapted for the Vita's control system, as well as new tools such as the Motion Recorder, Touch Sensor, Touch Tweaker, Touch Cursor, Touch Material and Layer Tool, all of which provide touch-based controls. There are also tools for other purposes, such as the Dephysicalise Tool, the Sticker Scrubber and a tool that creates a jelly-like substance that Sackboy can pass through. Another new tool, the Memoriser, can store data between levels and play sessions. Those who pre-ordered the game received a bonus BioShock costume pack. The pack includes Big Daddy and Little Sister costumes. A "Knights of Old" pack was also announced, offering knight, dragon, and princess costumes. Plot: The story follows a puppeteer believed to control wooden puppets called the Hollows, dangerous enemies of Carnivalia. He created them as replacements for the puppets he had discarded after being booed at a circus. To stop the threat, Sackboy travels through locations such as the Land of Odd, meeting characters who help him reach the Spooky Mansion, where the puppeteer lives. At the mansion, a tape reveals the puppeteer's name is Franklin, and he has been captured by his own Hollows. 
After Sackboy frees him, Franklin explains the Hollows were unwanted creations from an attempt to recreate his puppet friends — the Creator Curators Sackboy had encountered on the way to the mansion. Bitter from always being behind the scenes, Franklin had thrown them away; when he later sought them, a tear he shed brought them fully to life. Franklin is ecstatic to reunite with his now-living friends, and when they laugh, the Hollows revert to their original Sackboy forms. Reception: LittleBigPlanet PS Vita received positive reviews, garnering aggregate scores of 88/100 on Metacritic and 88.68% on GameRankings. Justin Calvert of GameSpot called the game the best in the series so far and stated, "This is the game that your Vita has been waiting for. For months, the shiny handheld has been aching to show you what it's really capable of, and with the arrival of Little Big Planet PS Vita, it finally has an opportunity to do so." Calvert, who gave the game an 8.5 out of 10, praised the "wonderfully varied" story levels, "excellent" controls, and the "easier than ever" creation tools, but criticized the tutorials for feeling incomplete. Matt Helgeson of Game Informer wrote, "While LittleBigPlanet has clearly settled into a comfortable groove, it’s still one of the best pure platformers on the market. LittleBigPlanet PS Vita is another stellar entry on Sackboy’s impressive resume." Helgeson awarded the game an 8.75/10 and praised the overall design, graphics, soundtrack, and the developers' ability to create a LittleBigPlanet game on par with the main games. In his review, IGN's Greg Miller concluded, "LittleBigPlanet PS Vita is the definitive LittleBigPlanet game. It's everything you loved (or possibly didn't) from the past games boiled down into a package you can play anywhere at any time." He continued: "You can collect prize bubbles while watching TV, download user-created levels at home and then play them on a plane, and sink hours into learning Create mode via 10-minute chunks at the laundromat. There are also new features like touch controls, games that don't involve Sackboy, and creation tools that could give you an endless supply of free games. Yes, the jumping is still floaty, the creation tools are complicated, and the load times are a bit too long, but that doesn't stop LittleBigPlanet PS Vita from being an amazing experience." Sophia Tong of GamesRadar commended the narration by Stephen Fry, the controls, and the story levels, saying, "LittleBigPlanet PS Vita encapsulates what the system can do and deserves a spot in your Vita library."
The Cuban Revolution of 1933, also called the Sergeants' Revolt, was a coup d'état that occurred in Cuba in September 1933. It began as a revolt of sergeants and enlisted men in the military, who soon allied with student activists in the Directorio Estudiantil Universitario. The coup deposed Carlos Manuel de Céspedes y Quesada as president, installing a new government led by a five-man coalition known as the Pentarchy of 1933. After only five days, the Pentarchy gave way to the presidency of Ramón Grau, whose term is known as the One Hundred Days Government. The leader of the revolt, Sergeant Fulgencio Batista, became the head of the armed forces and began a long period of influence on Cuban politics. Background: The authoritarian policies of Gerardo Machado and the Great Depression, beginning in 1929, plunged Cuba into an economic and social crisis, during which opposition groups proliferated. Demonstrations by the Directorio Estudiantil Universitario (Student Directory) and workers, together with pressure from U.S. Ambassador Sumner Welles, forced Machado to resign. Carlos Manuel de Céspedes y Quesada led a provisional government that included members of the opposition group ABC in its cabinet. Other elements of the Machado opposition were unsatisfied with the provisional government, which they saw as an unacceptable compromise with U.S. interventionism. On August 24, the Student Directory issued a Manifesto-Program that denounced the ABC and made various demands, including the formation of a new government. After the fall of Machado, the military perceived its situation as precarious. Opposition forces controlled Havana and took revenge on supporters of the Machado regime, including police and some soldiers. The military was reluctant to intervene in this situation, lest the public perceive it as an agent of the old regime. The arrest of 50 soldiers and 21 officers did not satisfy demands for reform. Critics of the Céspedes government, including members of the military, charged that it was not taking sufficient action against Machado's backers within the armed forces and that it had failed to reinstate officers who had opposed Machado. This situation exacerbated longstanding tensions related to age, class, and race among the ranks of officers. A group of sergeants began meeting at the Columbia barracks, forming the Columbia Military Union. Their ambition to improve conditions in the army quickly expanded into a plan for regime change. This group, later called the Junta of the Eight despite uncertainty about the number of members, included Batista and other members of his ABC cell, as well as Pablo Rodríguez, whom some perceived to be the group's leader. A funeral for Sergeant Miguel Ángel Hernández y Rodríguez, who had been captured and killed by the Machado government in May 1933, took place on 19 August 1933. This gave Batista the opportunity to deliver a passionate speech that brought him attention as a future leader. At the funeral he met journalist Sergio Carbó, who became an important contact in the civilian world. In August, the group of sergeants issued a manifesto calling for dignity, respect, and benefits for soldiers, and declaring their duty to rebel. Batista asked the ABC, of which he was a member, to publicize the manifesto. The ABC, having established itself as part of the status quo government, refused, and Batista and others left the group. Other factions within the military were also plotting against the Céspedes government, and some spoke openly against it.
As the movement grew, the plotters met in larger venues, including the Masonic Gran Logia de Cuba and a military hospital. These preparations became somewhat obvious, but meetings continued under the pretext of planning projects to improve enlisted men's quality of life. The action mostly took place in Havana, with some outreach to Matanzas Province shortly before the coup. On September 3 and 4, some lower-ranking officers at Camp Columbia directly raised issues of back pay and promotions with senior officers. On September 4, Captain Mario Torres Menier appeared at a meeting of the enlisted men at Camp Columbia. Batista allowed him to enter. The soldiers made their complaints with mounting enthusiasm; Torres Menier withdrew to consult with other superior officers. Another meeting was scheduled for 8 p.m. In the interim, leaders of the coup rallied their supporters. Batista contacted Carbó and secured the support of Juan Blas Hernández, a rebel who had opposed Machado for two years. The meeting that evening took place in a theater. The senior officers had been excluded. Batista spoke from onstage, declaring, "From this moment forward, do not obey anyone's orders but mine. First sergeants must immediately take control of their respective military units. If there is no first sergeant, or if he refuses to take command, the senior sergeant must do so. If there is no sergeant, a corporal. If there is no willing corporal, then a soldier, and if not, then a recruit. The units must have someone in command, and he must be an enlisted man." Thus the sergeants took uncontested control of Columbia barracks and soon established communications with sympathetic officers in other cities. Members of the Student Directory—beginning with José Leyva, Ramiro Valdés Daussá, Juan António Rubio Padilla, Carlos Prío Socarrás, Rubén de León, and Justo Carrillo—came to the barracks and joined forces with the army. While President Céspedes was away from Havana to survey hurricane damage, the rebels forced the remaining government officers in Havana to leave their posts. They then issued a proclamation announcing that they were in control of the country and set up a pentarchy modeled on the then-current government of Uruguay. After President Céspedes returned on September 5, members of the junta arrived at his office and informed him that they were to receive the government from him. Swayed by their claim to command the allegiance of the military rank and file, Céspedes vacated the presidential palace. The junta of officers and students proclaimed that it had taken power to fulfill the aims of the revolution; it briefly described a program that included economic restructuring, punishment of wrongdoers, recognition of public debts, establishment of courts, political reorganization, and other actions necessary to construct a new Cuba based on justice and democracy. Both Grau and Batista visited Welles on September 5 to seek support from the United States and ascertain its position.
long_en_273
wiki_en
956
en
The concepts and structures of Jewish Kabbalah have been applied in the contemporary world to foster comparative dialogue with the modern sciences and humanities. This interaction is uncommon because it requires deep knowledge of both traditional Kabbalah and secular disciplines, and because Jewish modernity has often been marked by isolation and entrenchment between the two. Authorities engaging in this dialogue range from traditional Orthodox teachers of Kabbalah to Neo‑Kabbalistic and academic scholars who read Kabbalah critically and universally. Traditional Kabbalistic views regard the material world and the "lower wisdoms" it produces ambivalently, in contrast to the "higher wisdom" of Torah: the physical realm is seen as dominated by impurity and hidden divinity, yet the messianic aim entails uniting lower and higher wisdoms as a prerequisite for redemption and the full revelation of God. Over generations, the sparks of divinity within lower wisdoms are clarified as the sciences and humanities mature and ascend toward an eschatological union with higher divine wisdoms; still, because they arise from a world of plurality, the sciences and humanities represent partial perspectives divergent from Kabbalah's unified divine view. Kabbalists overcome rigid dogmas by recognizing internal paradoxes, self-limitations, and shared perspectives. Concomitantly, the higher wisdom of Kabbalah progressively descends and becomes increasingly revealed, clarified by analogies from the lower wisdoms. The marriage of the two heals the pre-messianic division of "waters" (wisdoms) expressed in Genesis 1:7: "And God made the firmament, dividing the waters which were under the firmament from the waters which were above the firmament." Traditional separatist Haredi followers of Kabbalah view engagement with secular thought as dangerous for the unqualified, but potentially redemptive for sages capable of clarifying unity, especially with the exact sciences. A productive dialogue between Kabbalah and secular knowledge has become possible with modern and postmodern developments in the sciences and humanities. However, humanities-based historical criticism in religious studies poses the main challenge to traditionalist views of revelation and to the formation of modern Jewish denominations. In contrast, neo-Kabbalistic approaches welcome views of revelation compatible with critical perspectives within Modernist or Open Orthodoxy, Jewish feminism, and non-Orthodox Jewish movements. On the eve of modernity, the Vilna Gaon (18th century) foresaw a messianic Kabbalistic redemption of the sciences. In the early 20th century, Abraham Isaac Kook described a mystical process in which the secular unconsciously deepens the sacred. The present generation has seen a proliferation of syntheses between Kabbalah and secular wisdoms, driven by Jewish outreach, traditional and Neo-Hasidic spirituality, the publication of esoteric mystical works, deep engagement with both Jewish and secular cultures, revisionist ideas compatible with mysticism, and a contemporary flourishing of scholarship and new perspectives in Jewish mystical studies. Kabbalistic attitudes toward secular knowledge have ranged from harmonization to opposition. Traditionalist Kabbalah and its development in Hasidic Judaism often viewed secular wisdoms negatively. 
Although some historical Kabbalists were learned in medieval Jewish philosophy and occasionally in mathematics and the sciences, their relationship to medieval Jewish philosophy—which itself was built on ancient Greek science and cosmology—was ambiguous. The spread of Kabbalah from the 12th century onward was partly a response to the rationalist influence of Maimonides amid controversies over his teachings. Nonetheless, philosophical terminology from Jewish philosophy, both Neoplatonic and Aristotelian, permeated Kabbalistic systems and was reinterpreted mystically. The Kabbalistic dictum "Kabbalah begins where philosophy ends" asserted claims to superior knowledge but can also be read as an acknowledgment of the foundations laid by Jewish philosophy. Kabbalists claimed to see further, offering mythological and psychological answers to philosophical questions while standing on the shoulders of philosophy. Although opposed to dogmatic rationalism, some mystics—such as the systematizer Moses Cordovero (16th century)—acknowledged and drew on Maimonides' philosophical purification of Jewish theology, which removed mistaken corporeal interpretations. In Cordovero's dialectical method, imagination was used to grasp and then reject anthropomorphic conceptions in Kabbalah. Judah Loew ben Bezalel (16th century) expressed mystical ideas using the philosophical and scientific terminology of his day, valuing the natural sciences only insofar as they were subordinate to revelation. Kabbalistic views on secular studies were shaped by both mystical convictions and social context. Shneur Zalman of Liadi (18th century) warned that impure secular wisdom can endanger ordinary faith, yet taught that great sages—such as the philosophical Maimonides (12th century) and the mystical Nachmanides (13th century)—can discern the concealed divinity in secular knowledge and clarify its unity with the Torah, revealing new esoteric dimensions. Pre-Messianic Kabbalah, for example Nachmanides' commentary on the Torah, relates the seven days of Creation in Genesis 1 to the seven lower sephirot—Divine attributes from Chesed to Malkhut. These comprise the "Revealed World" of Divine emotional expression, contrasted with the first three sephirot—the "Hidden World" of the Divine mind. The Talmud relates the six days when God actively creates to the 6,000 years of creation in the traditional Jewish calendar, with the seventh day corresponding to the messianic era, a 1,000-year Sabbath of rest. The Zohar, the central text of Kabbalah (disseminated in the 13th–15th centuries CE), commenting on Genesis 7:11, relates that in the 600th year of the sixth millennium the floodgates of wisdom above and below will open to prepare the world for the messianic age. Within the pre-messianic sixth millennium, individual gates of the "fifty gates of wisdom" will open sequentially; but from the 600th year (traditionally associated with 1840 CE) all gates will be open, enabling cumulative discoveries of both higher and lower wisdoms, which will flood the world and prepare it for the revelation of Absolute Divine Unity in the seventh millennium. This culminates in the last generation before the messiah, when "even young children will know the secrets of the Torah."
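As a side note on the arithmetic: the identification of the 600th year of the sixth millennium with 1840 CE follows the standard Hebrew-to-Gregorian year conversion. A minimal sketch, with a function name of our own choosing, and ignoring the autumn offset between the two calendars' new years:

```python
# Standard year arithmetic: Hebrew year H overlaps Gregorian years
# H - 3761 and H - 3760; the later year is the one conventionally cited.
def hebrew_to_gregorian(hebrew_year: int) -> int:
    return hebrew_year - 3760

# The "600th year of the sixth millennium" is Hebrew year 5600.
assert hebrew_to_gregorian(5600) == 1840
```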
My name is Charles Anson. I moved in with my mother in her apartment after my father died. At first I hated the idea of living with her at the age of thirty-seven, but soon I got used to it and thought of her home as my home. I have to admit my life was easier than when I had my own place. My mother had a cook and a housekeeper, so I no longer had to buy my own food, cook it, or do any housecleaning, which I was never very good at anyway. My mother didn't give birth to me until her mid-forties, so to me she seemed old before her time. She had developed a bad heart in the years after my father's death and told me she was happy to have me there with her—I was her only family that counted, she said—even though we argued quite a lot at times about my drinking habits and the late hours I sometimes kept. My mother had a bad temper, which my father could have told you about if he had still been alive. I remember when I was little and heard them fighting late at night. It wasn't unusual to hear glass breaking or wood splintering. When my father got enough of being goaded, he would end up breaking something. In the morning when I asked about whatever it was that got broken, my mother would laugh and say my father had a little accident while sleepwalking. I knew it wasn't the truth, but it was a good way to gloss over an ugly situation. I went to work every day, and when I came home my mother was there, dinner on the table, and all was well. After dinner, I would usually step out if I felt like it, and I see now that my mother was a little jealous that I didn't spend all my time with her when I wasn't working. She watched movies on television and she was always happy to have me watch with her, but it wasn't my idea of a good time. I could only take so many Bette Davis or Joan Crawford movies. Most of the time when I came home from a night on the town, sometimes at one or two in the morning, my mother would have all the lights on in the place and the TV on, but she would have retired to her room. She said this made her feel safer when she was alone. I would turn everything off, starting with the TV, make my way to bed, sleep for about four hours, get up, and begin my day all over again, as so many of us working stiffs do. My mother had told me I didn't even need to work, that she had plenty of money for us both to live on, but I couldn't see myself hanging around all day with just her to talk to and having to ask her for money anytime I wanted to go out and have a few drinks. On weekends I always tried to spend either Saturday or Sunday with my mother, just the two of us. She liked to go for a drive and I would very often take her to the cemetery where my father was and take her to a hamburger place for lunch. If it was a Sunday, we would try to take in a museum or a concert. If we ever went to a movie, she always said she preferred seeing movies on TV, and when I told her most people who liked movies wanted to see them at the theatre and not on TV, she only shook her head as if she didn't understand. All in all, my life was agreeable. I didn't spend most of the money I made so I was able to invest. The market was doing well, so I did well. I didn't miss the things I didn't have that other people had, like a marriage and children. I had learned early in life that not everybody in the world is the same and I found it out more and more as I got older. My mother went on for years with her bad heart, but she came to a point where she couldn't go on any longer. 
She looked pale and drawn all the time and spent most of her time lying down. She stopped fixing herself up and having her hair done up. Most days she didn't even bother to get dressed. She went to the hospital for a few days and when she came home she swore she would never go back, no matter what. She wanted to be in the privacy of her own home and not have a bunch of strangers around her at the end. I hired a nurse to be with her during the day when I was at work and another at night. They just did their work quietly and effectively and didn't bother me. I paid them when the time came and left them to do whatever needed to be done. I decided to quit my job in early summer. I didn't need to work, as I said before, and all the time I was away I worried that the end would come for my mother and I wouldn't be there when she needed me. I dismissed both nurses and told them I would take over. My mother moved into one of the guest bedrooms—she didn't want to mess up her own room where all her treasures were—and became entirely bedridden. Her doctor sympathized with her desire to be at home and gave me lots of pills to give her. He told me I didn't have to hold back in administering her medicine and nobody would ever know the difference. I understood what he meant without it being explained. We kept her heavily sedated, and I knew she wasn't in any pain.
Arash had just finished bathing the kids and feeding them when he sank onto his favorite spot, exhausted. He was eagerly waiting for Hana's messages. His eyelids grew heavy as sleepiness assaulted him, but he fought to stay awake. Today he felt more tired than usual: their normally independent eldest had been clingy—she wanted to be spoon-fed too, although she ate fine at kindergarten. It did not help that the little one had been cranky all day. No amount of coaxing made her feel better; she kept throwing tantrums during bathing, feeding, and dressing. It had been a real struggle to put on her diaper. He was glad he had managed to stop himself from giving in to his anger and slapping her. Arash had thought that by now the child would have become accustomed to their mother's absence, but how wrong he had been. He hadn't had a good night's sleep since Hana left. It had started to take a toll on his work performance—he kept making mistake after mistake while drafting several proposals. Thankfully the deadlines were still far off, so he had time to amend them. Arash sighed deeply. He thought she hadn't texted him since yesterday; most likely she had forgotten again or fallen asleep too soon. He hoped she would spend more time with him today. He missed Hana dearly. He had never felt so alone. He had courted her since their school days. Their relationship evolved from arrogant disregard to head-over-heels infatuation, then to deep loyalty and passion. He had worked tirelessly for years to win her. Hana was the sun of his life, brightening it with her fiery passion; he was the rain for her sensitive heart, soothing it when it threatened to combust. Together they complemented each other completely—two pieces of the same whole. He had no friends outside work; she was and remained his best friend. He gladly skipped networking with colleagues over mamak and futsal so he could spend his limited time at home with the love of his life and their mischievous daughters. Like other young parents, their time to indulge in their shared passion for anime and manga dwindled to almost nothing. Arash couldn't even remember the last title he'd watched with her. Occasionally they managed to squeeze in fleeting romantic conversations after the kids were finally asleep. Arash waited, wallowing in memories of his wife's laughter and her excited banter about buying furry plushies. He would text his wife, "How's your day, Ayang?" followed by "I miss you," and an hour later, "Are you alright?" He had left the kids playing in the living room, the coffee table moved aside to avoid accidents. They played and fought, made up and fought again, while he waited absentmindedly with a deep longing in his heart. Feeling bitter, he opened the Webnovel app and read a few new chapters without focusing on them. Seeing young protagonists overcome challenges in a new world only reminded him of his wife, who was stuck in a strange, unknown forest. He thought, "Where are you, my love? I need you." He felt deeply sorrowful. His wife was the opposite of those young heroes; he still couldn't grasp how it could happen. If it were not for watching Spirited Away and reading transmigration stories, he wouldn't even have entertained the notion. Nonetheless, as a responsible, grounded person, he refused to be swayed by random, unfounded thoughts that had no proof. He had even rebuked his wife over the issue, which left a bad taste in his mouth. In recent days he filled his waking hours making tutorial after tutorial that he hoped would help her. 
It seemed she would stay in that unknown place for some time, since he had no idea where the location on the map she had painstakingly sent was. She even said a bird had drawn the map for her! He doubted the map's reliability but did not bring it up; he didn't want to sour their conversations again. He had passed the map to Hadi without telling his brother-in-law who had drawn the rough map on the dirt floor. As a digital analyst in the police department, his brother-in-law was better placed to trace Hana's whereabouts than her useless husband. Who knows—he might find some important clues. Honestly, his conviction had begun to waver. A snake that can charge. A fox with an antler. A bird that can draw a map. Don't tell me the next thing will be a flying fish, he thought bitterly. Within a week his wife had adopted so many new strays just because they were cute. What world-shaking bombshell would she reveal next? He felt both eager and uneasy. If only he were beside her now. He had never thought his wife delusional or capable of lying to him. "Papa, Papa, she bit my nose!" the teary-faced girl cried, jumping into his lap and bringing him out of his reverie. Arash gently massaged her nose and chanted, "Painnnn, buang! Painnnn, buang!" He made a throwing motion, as if tossing the pain away, after blowing on her red nose. "There—better now, right?" The little girl nodded; her aggrieved expression eased. The little one cried and rolled from side to side, throwing a tantrum. She was upset because Arash was purposely ignoring her. It was a silent punishment for the naughty little girl, who was always heavy-handed with her big sister.
Environmental risk transition is the process by which traditional communities with associated environmental health issues become more economically developed and experience new health issues. In traditional or economically undeveloped regions, people often suffer and die from infectious diseases or malnutrition due to poor food, water, and air quality. As economic development occurs, these environmental problems are reduced or solved, and others begin to arise. This leads to a shift in the nature of environmental hazards and, consequently, in the causes of death and disease.

Risk transition framework
Several transition frameworks have been developed to understand the impacts of socioeconomic development. The oldest and best-known is the demographic transition framework, established in the 1940s; it describes the shift from high fertility and high mortality in underdeveloped societies to lower fertility and mortality rates as development proceeds. Around 1970 the epidemiological transition framework was introduced to characterize changes in population health during development. To categorize causes of death and disease more clearly when studying epidemiological shifts, the following categories were created:
I. Traditional: infectious, nutritional, perinatal, and maternal causes.
II. Modern: cancer, heart disease, neuropsychiatric disorders, chronic lung disease, diabetes, and congenital causes.
In 1990, environmental health researcher Kirk R. Smith at the University of California, Berkeley proposed the "risk transition" framework in relation to the demographic and epidemiological transition frameworks. The theory holds that shifts in risk factors precede shifts in causes of death and disease. To emphasize prevention rather than response, the risk transition was further studied and quantified. Figure 1 illustrates how risk, epidemiological, and demographic transitions interact: changes in risk factors alter disease patterns and population health, which in turn affect demographics; demographic shifts also influence risk factors, so the three frameworks mutually impact one another. Smith later developed the "environmental risk transition" framework, which categorizes risks as traditional or modern and considers spatial dimensions. According to this framework, in early stages of development, environmental health problems concentrated in households—such as poor sanitation—are resolved but shifted into the community, producing issues such as urban pollution; these are examples of traditional and modern risks, respectively. As development proceeds, community-level risks decrease while risks shift to the global environment, creating concerns such as increased greenhouse gas emissions. Unlike traditional risks, modern risks tend to accumulate over time and often lack single identifiable causes. These categories were defined because most environmental risks associated with Category I diseases initially appeared at the household level; as development addressed these risks, environmental causes of Category II diseases became more significant at the community level. This concept is illustrated in Figure 2.

Quantifying
Using data from the Global Burden of Disease Study (GBD) and the Comparative Risk Assessment (CRA) managed by the World Health Organization (WHO), empirical data were gathered to test the environmental risk transition framework.
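To make the Category I/II scheme concrete, here is a toy encoding of the cause groupings as a lookup. The set contents come from the text above; the function and labels are illustrative and not part of the GBD/CRA tooling.

```python
# Toy lookup for the epidemiological transition categories described above.
CATEGORY_I = {  # "traditional" causes
    "infectious", "nutritional", "perinatal", "maternal",
}
CATEGORY_II = {  # "modern" causes
    "cancer", "heart disease", "neuropsychiatric", "chronic lung disease",
    "diabetes", "congenital",
}

def classify_cause(cause: str) -> str:
    """Return 'I' for traditional causes and 'II' for modern ones."""
    if cause in CATEGORY_I:
        return "I"
    if cause in CATEGORY_II:
        return "II"
    raise ValueError(f"uncategorized cause: {cause!r}")

assert classify_cause("perinatal") == "I"
assert classify_cause("diabetes") == "II"
```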
Measuring development
Common metrics of development include income per capita adjusted for purchasing power (gross domestic product per capita in purchasing-power-parity dollars, $PPP/capita); the figures used here are for the year 2000. The Human Development Index (HDI) combines purchasing-power-adjusted income per capita, life expectancy, and education level (also for 2000) to measure development in a population.

Risk metric
Risks were quantified in two ways: as the percentage of the total disease burden, measured in disability-adjusted life years (DALYs), and as burden per capita (DALYs per 1,000 population). The arithmetic is shown in the sketch at the end of this section.

Household
The three major environmental risks in households with young children, who are most at risk, are:
- Poor water, sanitation, and hygiene, which significantly contribute to diarrheal disease.
- Use of solid fuels (biomass or coal) for cooking and heating, which emits pollutants that contribute to acute lower respiratory infections, chronic obstructive pulmonary disease, and lung cancer.
- Inadequate household measures—such as lack of screening, pesticides, and bed nets—which contribute to a significant malaria burden.
As development increases, household environmental risks decline considerably: comparing poor and rich countries by $PPP/capita, household risks decrease by more than two orders of magnitude.

Community
The major environmental risks at the community level include:
- Urban outdoor air pollution.
- Lead pollution from gasoline and industrial sources.
- Occupational risks, including exposure to carcinogens, injuries, noise, and poor ergonomics.
- Traffic accidents affecting drivers and pedestrians.
Community risks fluctuate more with development than household risks do; the pattern is more complex and less pronounced. Traffic accidents and air pollution generally increase with development; lead and occupational risks, however, varied significantly.

Global
The WHO Comparative Risk Assessment (CRA) analyzed only one global environmental risk: climate change, which has had a relatively small impact on human health to date. No data were collected for other global risks, such as ozone depletion and land-use change. Exposure to climate change is expected to increase, which may amplify risks such as malaria. Results show that climate change–related risks decrease with higher levels of development, indicating that poorer populations are more susceptible to the diseases affected by climate change. This finding contradicts the environmental transition framework because the analysis considers where risks are experienced rather than where they originate. Nevertheless, as development proceeds, the global extent of climate change risk is likely to expand.

Limitations
Many important environmental health risks could not be addressed or analyzed in this study.
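As referenced under "Risk metric", the two burden measures reduce to simple arithmetic. A minimal sketch follows; the numbers are invented purely for illustration, and real values come from the GBD/CRA data.

```python
# Burden-of-disease arithmetic as described under "Risk metric".
def dalys_per_1000(attributable_dalys: float, population: int) -> float:
    """Burden per capita, expressed as DALYs per 1,000 population."""
    return attributable_dalys / population * 1_000

def share_of_total_burden(attributable_dalys: float, total_dalys: float) -> float:
    """Fraction of the total disease burden attributable to one risk."""
    return attributable_dalys / total_dalys

# Hypothetical country: 50 million people, 12 million total DALYs,
# 1.8 million of which are attributable to household solid-fuel use.
print(dalys_per_1000(1_800_000, 50_000_000))         # 36.0 DALYs per 1,000
print(share_of_total_burden(1_800_000, 12_000_000))  # 0.15, i.e. 15%
```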
Back into the Closet

Well, maybe you guessed it from this heading. If you're following my blog, you'll know that I (sometimes we) spent the entire weekend cleaning out the bedroom closet. Today we had cleaners over because we're getting ready to show the house early to our neighbor's realtor, since our neighbor wants to buy our house (but so far hasn't offered us anything for it). I got home expecting to see a sparkling, neat house, as my husband had called me several times during the day to ask about the disposition of some items. The girls weren't finished. Yes, our house was so dirty—despite being professionally cleaned two weeks ago by a different company—that the girls spent five hours at our house yesterday and still weren't done. That was embarrassing but understandable, because since we started clearing the house out, I have stopped trying to clean, too. But later I went into our closet, and maybe you can guess what I saw. Yes, indeed, the closet that I spent two days cleaning out was full of boxes. He did it again. I tried to keep my mouth shut, but finally I pointed out that I had repeatedly asked him not to fill up places that we had already cleared out, because the end effect was that we weren't making any progress, and even if we were, we would feel like we weren't. We ended up having a big fight (unusual for us), during which he accused me of saying the same things all the time, and I told him that was because I thought if I said them enough times he might listen. That sounds horrible, I know, but he also has a memory problem, so it's hard to know when I have to repeat myself. Then he pretty much admitted to me that if I asked him to do something, he was not going to do it; I had already suspected this was going on. Since he also refuses most of the time to have a calm talk about what our next steps should be, and he isn't going to do what I ask him to do, I have to admit to being totally stymied. Do I just let him work at whatever, even though it may not make sense at the time and may create more mess? For example, right now I am working on removing clutter, but he has decided to start removing furniture from the house, sometimes creating more clutter. It took him months to get motivated to work at all, so I feel like if I say nothing, he will just relapse into inactivity.

Author: whatmeread. Posted on August 25, 2016. Categories: hoarding, moving. Tags: junk, moving, packing.

Out of the Closet

This weekend's project was to clear out our bedroom closet. The state this closet was in was as much my fault as my husband's. We have far too many things in it. But it also got out of hand because of my husband's propensity for organization run amok. We don't have any linen closets. When I was single, I kept my sheets on the top closet shelf and the toilet paper in the bathroom cabinet. After my husband moved in, the bathroom cabinets got out of control and he installed shoe racks on the closet shelves. The one on the left was handy because it actually held shoes, but the one on the right made it impossible to store sheets or anything else. The top shelf became the repository for toilet paper and tissues, which my husband buys in bulk—buying in bulk being one of our problems. Then Wayne put a rack of shelves at the back of the closet, and they became stuffed with sheets and towels. Eventually we ran out of room, and so many things were on the floor that I kept tripping every time I went in.
On Saturday I removed everything from the top shelves and the floor. It took most of the day to sort through everything, packing items, putting donations in a bag for Goodwill, or throwing things out.
Back into the closet. Well, maybe you guessed it from this heading. If you're following my blog, you'll know that I (sometimes we) spent the entire weekend cleaning out the bedroom closet. Today we had cleaners over because we're getting ready to show the house early to our neighbor's realtor, since our neighbor wants to buy our house (but so far hasn't offered us anything for it). I got home expecting to see a sparkling, neat house, as my husband had called me several times during the day to ask about the disposition of some items. The girls weren't finished. Yes, our house was so dirty—despite being professionally cleaned two weeks ago by a different company—that the girls spent five hours at our house yesterday and still weren't done. That was embarrassing but understandable, because since we started clearing the house out, I have stopped trying to clean, too. But later I went into our closet, and maybe you can guess what I saw. Yes, indeed, the closet that I spent two days cleaning out was full of boxes. He did it again. I tried to keep my mouth shut, but finally I pointed out that I had repeatedly asked him not to fill up places that we had already cleared out, because the end effect was that we weren't making any progress, and even if we were, we would feel like we weren't. We ended up having a big fight (unusual for us), during which he accused me of saying the same things all the time, and I told him that was because I thought if I said them enough times he might listen. That sounds horrible, I know, but he also has a memory problem, so it's hard to know when I have to repeat myself. Then he pretty much admitted to me that if I asked him to do something, he was not going to do it; I had already suspected this was going on. Since he also refuses most of the time to have a calm talk about what our next steps should be, and he isn't going to do what I ask him to do, I have to admit to being totally stymied. Do I just let him work at whatever, even though it may not make sense at the time and may create more mess? For example, right now I am working on removing clutter, but he has decided to start removing furniture from the house, sometimes creating more clutter. He has taken months to motivate to work at all, so I feel like if I say nothing, he will just relapse into inactivity. Author: whatmeread. Posted on August twenty-five, two thousand sixteen. Categories: hoarding, moving. Tags: junk, moving, packing. six comments. Back into the Closet — Out of the Closet This weekend's project was to clear out our bedroom closet. The state this closet was in was as much my fault as my husband's. We have far too many things in it. But it also got out of hand because of my husband's propensity for organization run amok. We don't have any linen closets. When I was single, I kept my sheets on the top closet shelf and the toilet paper in the bathroom cabinet. After my husband moved in, the bathroom cabinets got out of control and he installed shoe racks on the closet shelves. One on the left was handy because it actually held shoes, but the one on the right made it impossible to store sheets or anything else. The top shelf became the repository for toilet paper and tissues, which my husband buys in bulk—buying in bulk being one of our problems. Then Wayne put a rack of shelves at the back of the closet, and they became stuffed with sheets and towels. Eventually we ran out of room, and so many things were on the floor that I kept tripping every time I went in. 
On Saturday I removed everything from the top shelves and the floor. It took most of the day to sort through everything, packing items, putting donations in a bag for Goodwill, or throwing things out.
Mwale Medical and Technology City (MMTC) is a community-owned sustainable metropolis in Butere Sub-county, Kakamega County, Kenya. It is centered on a large medical complex that includes the 5,000-patient-capacity Hamptons Hospital and a research and innovation park in the Plaza district. The city also has an industrial district anchored by a solar power plant; three other districts feature residential housing with a 36-hole golf course, a commercial shopping center with a mall, supermarkets, and hotels; and an airport district designed to evacuate patients to the hospital via a planned cable car. The project cost US$2 billion and is regarded as a model for new green cities worldwide, serving as a template for integrating and uplifting the local community and catalyzing regional economic growth.

History

A feasibility study conducted from 2007 to 2012 identified the site for a proposed medical tourism and technology city in western Kenya. Lead investor Julius Mwale, a Kenyan technology entrepreneur based in the United States, assembled a team of technology and healthcare experts and companies from the U.S.

Phase One began in 2014 and was completed in 2016; it included a multi-billion-shilling shopping and residential complex, Hamptons Mall, with Mwal-mart Supermarket as the anchor. The mall hosts Mwal-mart Supermarket, Hamptons Café Bed and Breakfast, a showroom, and more than 90,000 square feet of private residences.

Phase Two commenced in June 2016 and ran until September 2017. This included the first section of a 5,000-bed referral hospital, over 70 kilometers of roads, and more than 300 solar streetlights. Phase Two also included the initial phase of 4,800 homes expected to house doctors and nurses.

The final phase, from September 2017 to December 2020 and beyond, included a 36-hole golf resort and residences, an airport with a second shopping mall, a convention center, and a water park connected to the hospital by a cable car. Planned features also include a medical school, a technology park, and a 144-megawatt gasification power plant. The project cost is estimated at US$2 billion.

Since its initiation, MMTC has received widespread admiration from local and foreign visitors. The project's success has been attributed to its outreach to local communities, particularly the Marama clan of the Luhya, and to establishing key relationships. As a result, many have donated large tracts of ancestral land to benefit both the community and MMTC, and numerous families have gained newly constructed homes and rental units that house the growing number of project workers. MMTC has created employment for over 1,000 youths in the area. In return, the project has received key backing from these local stakeholders, which is said to have helped resist attempts by several politicians and Kakamega County Government officials to taint the project's image and suspend its construction over unexplained approval issues. In August 2018, the Kakamega County Government, led by Governor Wycliffe Oparanya, reached an agreement with lead investor Julius Mwale to set aside past grievances and facilitate the timely completion of this community-backed project.

Climate

MMTC is situated in a tropical rainforest climate. The area enjoys stable year-round temperatures averaging 20.8 °C (69.4 °F), ranging from average lows of 19.7 °C (67.4 °F) in the coolest month (July) to average highs of 29.4 °C (84.8 °F) in the warmest month (February). Mornings are typically cooler and afternoons warmer, but temperatures rarely rise above 31.7 °C (89 °F) or drop below 12.8 °C (55 °F). Humidity is 56–79% year-round, generally below 69% from December through March and above 72% from April to November. There is frequent rainfall, averaging as low as 77 mm (3.0 in) in the driest month (February) and as high as 244 mm (9.6 in) in the wettest month (April). The best times to visit are late June to early October and late November to early March.

Economy

At the center of the local economy is Hamptons Hospital, which welcomes patients for the treatment of cancer and other ailments. Mwale Medical and Technology City (MMTC) also contains a multi-billion-shilling shopping and residential complex. The city is powered by renewable energy, including thousands of solar-powered streetlights and a solar power plant expanding to 50 MW. A 36-hole golf resort and residences contain 1,500 rooms and 4,800 residences along the golf course. Other amenities include a planned water park, a convention center, and a second large mall at the airport. The city employs thousands of workers, making it one of the region's largest employers.

Plaza District

The Plaza District is one of the five economic centers of Mwale Medical and Technology City. It is anchored by the 5,000-patient-capacity Hamptons Hospital, an innovation park, and commercial centers offering shopping, dining, hospitality, and residential homes.

Hamptons Hospital

Hamptons Hospital acquired Ksh 21 billion (US$200 million) worth of equipment in 2021, enabling it to become one of the world's leading hospitals. It opened in July 2019 offering cancer treatment and subsequently added other departments. The hospital provides free treatment to the two million residents of Kakamega County with National Hospital Insurance Fund (NHIF) cards and is NHIF-accredited. Ambulatory services were launched to provide emergency care to residents in western Kenya.

The hospital supports the community beyond providing treatment. It distributes free solar street lights to schools and across the county. One billion shillings (US$10 million) has been spent on a community lighting program in Kakamega County. The hospital is the western region reference center for COVID-19 treatment. It is expanding services to cover the Lake Victoria region with the introduction of a Hamptons floating hospital expected to serve an additional 32 million residents along the lake's shores.
March 29 (Reuters) - For other diaries, please see: Top Economic Events; Emerging Markets Economic Events; Government Debt Auctions; Political and General News; U.S. Federal Reserve. This diary is filed daily.

THURSDAY, MARCH 29
NEW YORK - Federal Reserve Bank of Philadelphia President Patrick Harker speaks on the economic outlook before a New York Association of Business Economics luncheon - 1700 GMT.

MONDAY, APRIL 2
DULUTH, United States - Federal Reserve Bank of Minneapolis President Neel Kashkari speaks on the economy and monetary policy before a student town hall hosted by the University of Minnesota at Duluth - 2200 GMT.

TUESDAY, APRIL 3
NEW YORK - Federal Reserve Board Governor Lael Brainard speaks on "Financial Stability" at an event hosted by the NYU Stern Center for Global Economy and Business - 2030 GMT.
MADRID - Bank of Spain/IMF conference: "Spain: From Recovery to Resilience."
DULUTH, United States - Federal Reserve Bank of Minneapolis President Neel Kashkari participates in a moderated question-and-answer session at the Regional Economic Indicators Forum - 1330 GMT.

WEDNESDAY, APRIL 4
LITTLE ROCK, Arkansas - Federal Reserve Bank of St. Louis President James Bullard makes a presentation at the Arkansas Bankers Association & Arkansas State Bank Department's Day with the Commissioner - 1345 GMT.
WILBERFORCE, Ohio - Federal Reserve Bank of Cleveland President Loretta Mester speaks on "Diversity in Economics" before the Central State University Leaders, Executives, Entrepreneurs and Directors (LEED) program - 1500 GMT.

THURSDAY, APRIL 5
SARASOTA, Fla. - Federal Reserve Bank of Atlanta President Raphael Bostic speaks on "Financial Literacy" before a Financial Literacy Day event hosted by the University of South Florida Sarasota-Manatee - 1700 GMT.
ZURICH, Switzerland - Alternate member of the Governing Board of the Swiss National Bank Dewet Moser gives a speech, "Yesterday and today: change in the money and foreign exchange market," at a Money Market Event - 1600 GMT.
ZURICH, Switzerland - Member of the Governing Board of the Swiss National Bank Andrea Maechler gives a speech, "Heute und morgen: Ein Blick in die digitale Zukunft" (Today and tomorrow: a look into the digital future), at a Money Market Event - 1600 GMT.

FRIDAY, APRIL 6
CERNOBBIO, Italy - European Central Bank Executive Board member Benoit Coeure speaks at a conference on "The Outlook for the Economy and Finance" - 0645 GMT.
LONDON - Governor of Norges Bank Oystein Olsen gives a speech in London - 1210 GMT.

TUESDAY, APRIL 10
OSLO - Governor of Norges Bank Oystein Olsen gives a speech to foreign embassy representatives - 1300 GMT.

WEDNESDAY, APRIL 11
FRANKFURT, Germany - ECB Governing Council member Ardo Hansson speaks in Frankfurt - 1030 GMT.
FRANKFURT, Germany - ECB Governing Council meeting. No interest rate announcements scheduled.
WASHINGTON, D.C. - The U.S. Federal Reserve's Federal Open Market Committee (FOMC) releases minutes from its March 20-21 policy meeting - 1800 GMT.

THURSDAY, APRIL 12
STAVANGER, Norway - Norges Bank Governor Oystein Olsen and Deputy Governor Egil Matsen give speeches to the regional network, Region South-West, and lectures at the University of Stavanger - 0730 GMT.
OSLO - Norges Bank Deputy Governor Jon Nicolaisen speaks at the Norwegian Academy of Science and Letters - 1530 GMT.

FRIDAY, APRIL 13
ST. LOUIS - Federal Reserve Bank of St. Louis President James Bullard makes a presentation on "Living Standards across U.S. Metropolitan Statistical Areas" at Washington University's Calhoun Lecture Series - 1300 GMT.
TROMSO, Norway - Norges Bank Governor Oystein Olsen and Deputy Governor Egil Matsen give speeches to the regional network, Region North, in Tromso - 1030 GMT.

MONDAY, APRIL 16
STOCKHOLM - Riksbank executive board meeting - 0700 GMT.

TUESDAY, APRIL 17
MADRID - Federal Reserve Bank of San Francisco President John Williams speaks before the National Association for Business Economics (NABE)-Bank of Spain International Symposium, "Global Recovery: The Good, the Bad, and the Ugly" - 1315 GMT.
CHICAGO - Federal Reserve Bank of Chicago President Charles Evans speaks before a Chicago Rotary Club luncheon - 1740 GMT.

WEDNESDAY, APRIL 18
OTTAWA - Bank of Canada key policy interest rate announcement and monetary policy report - 1400 GMT.
OTTAWA - Bank of Canada Governor Stephen Poloz and Senior Deputy Governor Carolyn Wilkins hold a press conference to discuss the Monetary Policy Report - 1515 GMT.
WASHINGTON, D.C. - The U.S. Federal Reserve issues its Beige Book on economic conditions - 1800 GMT.

THURSDAY, APRIL 19
WASHINGTON, D.C. - Federal Reserve Vice Chair for Supervision Randal Quarles testifies before the Senate Banking Committee hearing, "The Semiannual Testimony of the Federal Reserve's Supervision and Regulation of the Financial System" - 1330 GMT.
OSLO - Governor of Norges Bank Oystein Olsen gives a speech for the financial markets association - 1600 GMT.
PHILADELPHIA, Pennsylvania - Norges Bank Deputy Governor Jon Nicolaisen gives a speech at the 36th Annual Monetary and Trade Conference, "Cryptocurrencies in the Global Economy" - time not specified.

WEDNESDAY, APRIL 25
STOCKHOLM - Riksbank monetary policy meeting - 0700 GMT.

THURSDAY, APRIL 26
FRANKFURT, Germany - ECB Governing Council meeting, followed by an interest rate announcement - time not specified.
FRANKFURT, Germany - ECB President Mario Draghi holds a press conference after the interest rate meeting - time not specified.
TOKYO - Bank of Japan Monetary Policy Meeting (to April 27).
STOCKHOLM - Riksbank interest rate decision and monetary policy report - 0730 GMT.

FRIDAY, APRIL 27
BERN, Switzerland - Speeches by Jean Studer, president of the Swiss National Bank Council, and Thomas Jordan, chairman of the Swiss National Bank, at the SNB's General Meeting of Shareholders - 0800 GMT.

Note: Inclusion of items in this diary does not necessarily mean Reuters will file a story based on the event. For technical issues, contact Thomson Reuters Customer Support (TRCS).
Good evening, or morning, or maybe you're getting ready to sleep; it doesn't matter. Hello, it's Becca. I've been busy, haven't I? I met a Dragon God and gained a new party member.

Brigid was initially surprised and awed by my powers, then slowly became numb to the endless things I showed her. By the time I brought her to my world she was like a dried husk, incapable of further shock. It was funny watching her reaction when I introduced her to Earth and the internet; for obvious reasons I also taught her to use incognito mode. Needless to say, I had to soundproof her room, get her headphones, and add a lock. I imagine it will be a while before we see her again. If needed, I could set up an online shipping account with discreet boxes.

Now that I have a breather, I thought I'd take a stroll around Earth. I've been adventuring in a fantasy world full of monsters and fights, so a change of scenery is necessary. After being away from Earth I've been itching to let loose; if you had awesome powers, wouldn't you want to use them back home? The thing is, I can't live in peace if I start acting like a hero in broad daylight. I want recognition, but I don't want to be disturbed all the time. I could also go the route of being generous, throwing money around, and solving all kinds of problems, but I don't have the capital or the paperwork to prove where it came from. I can't just conjure money or fake the records.

Where did this itch come from, you ask? Well, since I've had nothing to do lately on Earth, I've been helping one Miss Julia Mitchel, the prosecutor who helped the girls at the bakery when a scummy scammer had his eye on them. Ever since, I've been assisting her with cases, mostly criminal ones in which she needed key evidence or details beyond her reach. Those things aren't out of my reach: with my all-seeing clairvoyance combined with invisibility illusion magic, it's a piece of cake for me to serve as a drone and spy on key targets. Her reputation has been transforming into that of a demonic prosecutor who gets justice with a 100% win rate. Her enemies have no idea how she keeps getting important intel and evidence that seems to magically end up in her hands. In return she's given me legal advice and watched out for the girls whenever possible. It also seems she's become a regular customer at the bakery; she discovered that the girls who work there have all slowly grown more beautiful. Rumor has it something in the bread helps with skincare and has beautifying effects. It's almost scary how effective the lucky charm I enchanted and left in the store has been at doing all of this.

It feels good helping out a modern hero in the courtroom, but it sucks that I don't get any credit for it. That's why I've been thinking about what I can do to aid others and stand out without attracting trouble.

All of that brings me to the present. I am currently sitting at a table in a bakery, consulting Julia. Although I haven't revealed my powers to her, she probably knows that I have some serious skills or a notable background.

Becca: "So, Julia, what do you think I can do? I'm not asking for any world-ending issues, just something small and local to start. Something to get me into the morning papers."

Julia: "Please, just refer to me as Miss Mitchel when we're in a public place like this. It's more formal, and I don't want others poking around in our relationship. Remember, you're like my secret weapon in the court."

Becca: "I know that, I know. But, Prosecutor Mitchel, you must have some ideas for me. I have no problems solving issues with brute force, but that just doesn't work in modern society. Plus it draws attention to not only me but my businesses. And there's that thing called the law to be careful of."

Julia pushes up her glasses and adopts a thinking pose.
Multilingual support: To serve a global audience, beyond English and Chinese, Qwen2-VL now supports multilingual context understanding within images, including most European languages, Japanese, Korean, Arabic, Vietnamese, and others.

Approach

The Qwen2-VL series consists of models of three sizes: Qwen2-VL-2B, Qwen2-VL-7B, and Qwen2-VL-72B. Notably, Qwen2-VL employs a 675M-parameter ViT across the various-sized LLMs, ensuring that the computational load of the ViT remains constant regardless of the scale of the LLM.

Model Architecture

We have retained the Qwen-VL framework, which integrates vision encoders and language models. For various scale adaptations, we have implemented a Vision Transformer (ViT) with approximately 675 million parameters, adept at handling both image and video inputs. In terms of language processing, we have opted for the more powerful Qwen2 series of language models. To further enhance the model's ability to effectively perceive and comprehend visual information in videos, we introduced several key upgrades:

Naive Dynamic Resolution

A key architectural improvement in Qwen2-VL is the introduction of naive dynamic resolution support. Unlike Qwen-VL, Qwen2-VL can now process images of any resolution, dynamically converting them into a variable number of visual tokens. To support this feature, we modified the ViT by removing the original absolute position embeddings and introducing 2D-RoPE to capture the two-dimensional positional information of images. At the inference stage, images of varying resolutions are packed into a single sequence, with the packed length controlled to limit GPU memory usage. Furthermore, to reduce the visual tokens of each image, a simple MLP layer is employed after the ViT to compress adjacent 2x2 tokens into a single token, with the special <|vision_start|> and <|vision_end|> tokens placed at the beginning and end of the compressed visual tokens. As a result, an image with a resolution of 224x224, encoded with a ViT using patch_size=14, will be compressed to 66 tokens before entering the LLM.

Multimodal Rotary Position Embedding (M-RoPE)

Another key architectural enhancement is the innovation of Multimodal Rotary Position Embedding (M-RoPE). Unlike the traditional 1D-RoPE in LLMs, which is limited to encoding one-dimensional positional information, M-RoPE effectively models the positional information of multimodal inputs. This is achieved by deconstructing the original rotary embedding into three components: temporal, height, and width. For text inputs, these components utilize identical position IDs, making M-RoPE functionally equivalent to 1D-RoPE. When processing images, the temporal IDs of each visual token remain constant, while distinct IDs are assigned to the height and width components based on the token's position in the image. For videos, which are treated as sequences of frames, the temporal ID increments for each frame, while the height and width components follow the same ID assignment pattern as images. In scenarios where the model's input encompasses multiple modalities, position numbering for each modality is initialized by incrementing the maximum position ID of the preceding modality by one. M-RoPE not only enhances the modeling of positional information but also reduces the value of position IDs for images and videos, enabling the model to extrapolate to longer sequences during inference.
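To make the two pieces of arithmetic above concrete (the 66-token example and the per-modality position IDs), here is a minimal Python sketch. The function names are hypothetical; this illustrates the scheme as described in the text for the text and image cases, not the model's actual implementation.

```python
# Illustrative sketch only; function names are hypothetical, not Qwen2-VL's code.

def visual_token_count(height: int, width: int, patch_size: int = 14) -> int:
    """Tokens one image contributes after ViT patching, 2x2 MLP merging,
    and the <|vision_start|>/<|vision_end|> delimiters."""
    grid_h, grid_w = height // patch_size, width // patch_size
    merged = (grid_h // 2) * (grid_w // 2)   # 2x2 neighbors -> 1 token
    return merged + 2                         # + the two special tokens

assert visual_token_count(224, 224) == 66     # the worked example in the text

def mrope_ids_for_text(num_tokens: int, start: int = 0):
    """Text: the temporal, height, and width streams share one ID,
    so M-RoPE reduces to ordinary 1D-RoPE."""
    return [(start + i, start + i, start + i) for i in range(num_tokens)]

def mrope_ids_for_image(grid_h: int, grid_w: int, start: int = 0):
    """Image: a constant temporal ID; height/width IDs follow the token's
    position in the (merged) patch grid."""
    return [(start, start + h, start + w)
            for h in range(grid_h) for w in range(grid_w)]

# Five text tokens followed by a 2x2 merged image grid: the image's numbering
# starts at the previous modality's maximum position ID plus one.
text_ids = mrope_ids_for_text(5)
next_start = max(max(ids) for ids in text_ids) + 1
image_ids = mrope_ids_for_image(2, 2, start=next_start)
print(image_ids)   # [(5, 5, 5), (5, 5, 6), (5, 6, 5), (5, 6, 6)]
```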
Unified Image and Video Understanding

Qwen2-VL employs a mixed training regimen incorporating both image and video data, ensuring proficiency in image understanding and video comprehension. To preserve video information as completely as possible, we sampled each video at two frames per second. Additionally, we integrated 3D convolutions with a depth of two to process video inputs, allowing the model to handle 3D tubes instead of 2D patches, thus enabling it to process more video frames without increasing the sequence length. For consistency, each image is treated as two identical frames. To balance the computational demands of long video processing with overall training efficiency, we dynamically adjust the resolution of each video frame, limiting the total number of tokens per video to 16384. This training approach strikes a balance between the model's ability to comprehend long videos and training efficiency.
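As a rough illustration of the 16384-token video budget just described, the sketch below downscales frames uniformly so a clip sampled at two frames per second fits the budget. The paper states only the budget, not the exact resizing rule, so the scaling logic here is an assumption for illustration.

```python
import math

# Illustrative sketch; the scaling rule below is assumed, not from the paper.
MAX_VIDEO_TOKENS = 16384
PATCH = 14          # ViT patch size; 2x2 patches merge into one visual token

def frame_tokens(h: int, w: int) -> int:
    """Visual tokens one frame contributes after patching and 2x2 merging."""
    return (h // PATCH // 2) * (w // PATCH // 2)

def fit_video(h: int, w: int, seconds: float, fps: float = 2.0):
    """Choose a uniform downscale so all sampled frames fit the token budget."""
    n_frames = max(1, int(seconds * fps))
    budget_per_frame = MAX_VIDEO_TOKENS / n_frames
    scale = min(1.0, math.sqrt(budget_per_frame / max(1, frame_tokens(h, w))))
    # Snap down to dimensions aligned with a 2x2 block of patches.
    new_h = max(2 * PATCH, int(h * scale) // (2 * PATCH) * (2 * PATCH))
    new_w = max(2 * PATCH, int(w * scale) // (2 * PATCH) * (2 * PATCH))
    return n_frames, new_h, new_w

# A 60-second 1080p clip sampled at 2 fps shares the budget across 120 frames:
n, h, w = fit_video(1080, 1920, 60)
print(n, h, w, n * frame_tokens(h, w))   # 120 frames; 14400 <= 16384 tokens
```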
Training

Following Qwen-VL, we adopt a three-stage training methodology. In the first stage, we focus exclusively on training the Vision Transformer (ViT) component, utilizing a vast corpus of image-text pairs to enhance semantic understanding within the Large Language Model (LLM). In the second stage, we unfreeze all parameters and train with a wider range of data for more comprehensive learning. In the final stage, we lock the ViT parameters and perform exclusive fine-tuning of the LLM using instructional datasets.

The model is pre-trained on a diverse dataset that includes image-text pairs, optical character recognition (OCR) data, interleaved image-text articles, visual question answering datasets, video dialogues, and image knowledge datasets. Our data sources primarily comprise cleaned web pages, open-source datasets, and synthetic data. The cutoff date for our data knowledge is June 2023. This diverse data composition is instrumental in developing a robust multimodal understanding capability.

During the initial pre-training phase, Qwen2-VL is exposed to a corpus of around 600 billion tokens. The LLM component of Qwen2-VL is initialized using the parameters from Qwen2, while the vision encoder of Qwen2-VL is initialized with the ViT derived from DFN. However, the fixed position embedding in the original DFN ViT is replaced by RoPE-2D. This pre-training phase primarily focuses on learning image-text relationships, textual content recognition within images through OCR, and image classification tasks. Such foundational training is instrumental in enabling the model to develop a robust understanding of core visual-textual correlations and alignments.

The second pre-training phase marks a significant progression, involving an additional 800 billion tokens of image-related data. This stage introduces a higher volume of mixed image-text content, facilitating a more nuanced understanding of the interplay between visual and textual information. The incorporation of visual question answering datasets refines the model's capacity to respond to image-related queries. Moreover, the inclusion of multitasking datasets is pivotal in developing the model's ability to navigate diverse tasks concurrently, a skill of paramount importance when dealing with complex, real-world datasets. Concurrently, purely textual data continues to play a crucial role in maintaining and advancing the model's linguistic proficiency.

Throughout the pre-training stages, Qwen2-VL processes a cumulative total of 1.4 trillion tokens. Specifically, these tokens encompass not only text tokens but also image tokens. During the training process, however, we only provide supervision for the text tokens. This exposure to extensive and diverse linguistic and visual scenarios ensures that the model develops a deep understanding of the intricate relationships between visual and textual information, thereby laying a robust foundation for various multimodal tasks.

During the instruction fine-tuning phase, we employ the ChatML format to construct instruction-following data. This dataset encompasses not only pure text-based dialogue data but also multimodal conversational data. The multimodal components include image question-answering, document parsing, multi-image comparison, video comprehension, video stream dialogue, and agent-based interactions. Our comprehensive approach to data construction aims to enhance the model's capability to understand and execute a wide range of instructions across various modalities. By incorporating diverse data types, we seek to develop a more versatile and robust language model capable of handling complex, multimodal tasks in addition to traditional text-based interactions.

Data Format. In line with Qwen-VL, Qwen2-VL also employs special tokens to distinguish vision and text inputs. Tokens <|vision_start|> and <|vision_end|> are inserted at the start and end of the image feature sequence to demarcate the image content.

Dialogue Data. In terms of dialogue format, we construct our instruction tuning dataset using the ChatML format, where each interaction's statement is marked with two special tokens (<|im_start|> and <|im_end|>) to facilitate dialogue termination.

Visual Grounding. To endow the model with visual grounding capabilities, bounding box coordinates are normalized within [0, 1000) and represented as "(X top left, Y top left), (X bottom right, Y bottom right)". Tokens <|box_start|> and <|box_end|> are utilized to demarcate bounding box text. To accurately link bounding boxes with their textual descriptions, we introduce tokens <|object_ref_start|> and <|object_ref_end|> to indicate the content that the bounding box references, thereby allowing the model to effectively interpret and generate precise descriptions of specific regions.
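For concreteness, here is a small sketch that renders one referred object in the grounding format above. The token names and the [0, 1000) coordinate convention come from the text; the helper function itself is hypothetical.

```python
def grounding_text(label: str, box, img_w: int, img_h: int) -> str:
    """Render one referred object in the grounding text format.
    `box` is (x0, y0, x1, y1) in pixels; output coords lie in [0, 1000)."""
    x0, y0, x1, y1 = box
    nx0 = int(x0 / img_w * 1000)
    ny0 = int(y0 / img_h * 1000)
    nx1 = int(x1 / img_w * 1000)
    ny1 = int(y1 / img_h * 1000)
    return (f"<|object_ref_start|>{label}<|object_ref_end|>"
            f"<|box_start|>({nx0},{ny0}),({nx1},{ny1})<|box_end|>")

# A 100x60-pixel box at (50, 40) in a 640x480 image:
print(grounding_text("the red car", (50, 40, 150, 100), 640, 480))
# <|object_ref_start|>the red car<|object_ref_end|><|box_start|>(78,83),(234,208)<|box_end|>
```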
Visual Agent. To develop Qwen2-VL as a general-purpose VL-Agent, we treat various agent tasks, such as UI operations, robotic control, games, and navigation, as sequential decision-making problems, enabling Qwen2-VL to accomplish tasks through multi-step action execution. For each task, we first define a set of permissible actions and keyword patterns for function calls. Qwen2-VL then analyzes the observations, performs reasoning and planning, executes the selected actions, and interacts with the environment to acquire new observations. This cycle repeats iteratively until the task is successfully completed. By integrating various tools and leveraging the vision perception capabilities of large vision-language models (LVLMs), Qwen2-VL is able to iteratively execute increasingly complex tasks involving real-world visual interactions.

Multimodal Model Infrastructure

The Qwen2-VL models were trained on Alibaba Cloud's PAI-Lingjun Intelligent Computing Service, with its scalable computing, auto resuming, and straggler detection.

Storage. We use Alibaba Cloud's ultra-speed CPFS (Cloud Parallel File Storage) to build the storage system for Qwen2-VL pre-training and post-training. We decoupled text data and vision data storage. We simply store text data on CPFS and use mmap for efficient access. For vision data, we use Alibaba Cloud's OSS (Object Storage Service) for persistent storage. During training, we accessed vision data through OSS's Python client concurrently and tuned the concurrency and retry parameters to avoid reaching the QPS (queries per second) limit. We also found that video data decoding is a main bottleneck, especially for long videos. After several attempts with open-source and in-house software failed, we opted for a caching decoding technique. Checkpointing saves each GPU's optimizer and model states on CPFS.

Parallelism. We use 3D parallelism, which combines data parallelism (DP), tensor parallelism (TP), and pipeline parallelism (PP), to scale Qwen2-VL model training. We also leverage DeepSpeed's ZeRO-1 redundancy optimizer to shard optimizer states for memory saving. Sequence parallelism (SP) with selective activation checkpointing was leveraged to reduce memory usage. When enabling TP training, we always shard the vision encoder and the large language model together, but not the vision merger, due to its relatively few parameters. We found that TP training would result in different model shared weights due to the convolution operator's non-deterministic behavior. We resolved this issue by performing offline reduction of the shared weights, thereby avoiding an additional all-reduce communication step. This approach resulted in only a minimal impact on performance.

We leverage 1F1B PP for Qwen2-VL-72B training. We combine the vision encoder, vision adapter, and several of the LLM's decoder layers into one stage, and evenly split the remaining decoder layers. Note that the vision and text sequence lengths are dynamic for each data point. We broadcast the dynamic sequence lengths before initiating the 1F1B process and access the shape information using batch indices. We also implemented an interleaved 1F1B PP but found it slower than the standard 1F1B setting.

Software. We use PyTorch version 2.1.2 with CUDA 11.8 for training. Additionally, we leverage flash-attention for efficient training in both the vision encoder and the LLM. We also utilize fused operators such as LayerNorm, RMSNorm, and Adam. Besides this, we leverage the overlap of communication and computation during matrix multiplication in our training process.

Experiments

In this section, we first evaluate the model's performance by conducting a comparative analysis across a variety of visual benchmarks, demonstrating the advantages of our approach.
Multilingual support: To serve a global audience, beyond English and Chinese, Qwen two VL now supports multilingual context understanding within images, including most European languages, Japanese, Korean, Arabic, Vietnamese, and others. Approach The Qwen two VL series consists of models of three sizes, which are Qwen two VL two B, Qwen two VL seven B and Qwen two VL seventy-two B. Notably, Qwen two VL employs a six hundred seventy-five M parameter ViT across various-sized LLMs, ensuring that the computational load of the ViT remains constant regardless of the scale of the LLM. Model Architecture We have retained the Qwen VL framework, which integrates vision encoders and language models. For various scale adaptations, we have implemented a Vision Transformer (ViT) with approximately six hundred seventy-five million parameters, adept at handling both image and video inputs. In terms of language processing, we have opted for the more powerful Qwen two series of language models. To further enhance the model’s ability to effectively perceive and comprehend visual information in videos, we introduced several key upgrades: Naive Dynamic Resolution A key architectural improvement in Qwen two VL is the introduction of naive dynamic resolution support. Unlike Qwen VL, Qwen two VL can now process images of any resolution, dynamically converting them into a variable number of visual tokens. To support this feature, we modified ViT by removing the original absolute position embeddings and introducing two D RoPE to capture the two-dimensional positional information of images. At the inference stage, images of varying resolutions are packed into a single sequence, with the packed length controlled to limit GPU memory usage. Furthermore, to reduce the visual tokens of each image, a simple MLP layer is employed after the ViT to compress adjacent two by two tokens into a single token, with the special vision start and vision end tokens placed at the beginning and end of the compressed visual tokens. As a result, an image with a resolution of two hundred twenty four by two hundred twenty four, encoded with a ViT using patch size equals fourteen, will be compressed to sixty six tokens before entering LLM. Multimodal Rotary Position Embedding (M-RoPE) Another key architectural enhancement is the innovation of Multimodal Rotary Position Embedding (M-RoPE). Unlike the traditional one D RoPE in LLMs, which is limited to encoding one-dimensional positional information, M-RoPE effectively models the positional information of multimodal inputs. This is achieved by deconstructing the original rotary embedding into three components: temporal, height, and width. For text inputs, these components utilize identical position IDs, making M-RoPE functionally equivalent to one D RoPE. When processing images, the temporal IDs of each visual token remain constant, while distinct IDs are assigned to the height and width components based on the token's position in the image. For videos, which are treated as sequences of frames, the temporal ID increments for each frame, while the height and width components follow the same ID assignment pattern as images. In scenarios where the model's input encompasses multiple modalities, position numbering for each modality is initialized by incrementing the maximum position ID of the preceding modality by one. 
M RoPE not only enhances the modeling of positional information but also reduces the value of position IDs for images and videos, enabling the model to extrapolate to longer sequences during inference. Unified Image and Video Understanding Qwen two VL employs a mixed training regimen incorporating both image and video data, ensuring proficiency in image understanding and video comprehension. To preserve video information as completely as possible, we sampled each video at two frames per second. Additionally, we integrated three D convolutions with a depth of two to process video inputs, allowing the model to handle three D tubes instead of two D patches, thus enabling it to process more video frames without increasing the sequence length. For consistency, each image is treated as two identical frames. To balance the computational demands of long video processing with overall training efficiency, we dynamically adjust the resolution of each video frame, limiting the total number of tokens per video to sixteen thousand three hundred eighty-four. This training approach strikes a balance between the model's ability to comprehend long videos and training efficiency. Training Following Qwen-VL, we adopt a three-stage training methodology. In the first stage, we focus exclusively on training the Vision Transformer (ViT) component, utilizing a vast corpus of image-text pairs to enhance semantic understanding within the Large Language Model (LLM). In the second stage, we unfreeze all parameters and train with a wider range of data for more comprehensive learning. In the final stage, we lock the ViT parameters and perform exclusive fine-tuning of the LLM using instructional datasets. The model is pre-trained on a diverse dataset that includes image-text pairs, optical character recognition (OCR) data, interleaved image-text articles, visual question answering datasets, video dialogues, and image knowledge datasets. Our data sources primarily comprise cleaned web pages, open-source datasets, and synthetic data. The cutoff date for our data knowledge is June two thousand twenty three. This diverse data composition is instrumental in developing a robust multimodal understanding capability. During the initial pre-training phase, Qwen two VL is exposed to a corpus of around six hundred billion tokens. The LLM component of Qwen two VL is initialized using the parameters from Qwen two, while the vision encoder of Qwen two VL is initialized with the ViT derived from DFN. However, the fixed position embedding in the original DFN’s ViT is replaced by RoPE two D. This pre-training phase primarily focuses on learning image-text relationships, textual content recognition within images through OCR, and image classification tasks. Such foundational training is instrumental in enabling the model to develop a robust understanding of core visual-textual correlations and alignments. The second pre-training phase marks a significant progression, involving an additional eight hundred billion tokens of image-related data. This stage introduces a higher volume of mixed image-text content, facilitating a more nuanced understanding of the interplay between visual and textual information. The incorporation of visual question answering datasets refines the model's capacity to respond to image-related queries. Moreover, the inclusion of multitasking datasets is pivotal in developing the model's ability to navigate diverse tasks concurrently, a skill of paramount importance when dealing with complex, real-world datasets. 
Concurrently, purely textual data continues to play a crucial role in maintaining and advancing the model's linguistic proficiency. Throughout the pre-training stages, Qwen two-VL processes a cumulative total of one point four trillion tokens. Specifically, these tokens encompass not only text tokens but also image tokens. During the training process, however, we only provide supervision for the text tokens. This exposure to extensive and diverse linguistic and visual scenarios ensures that the model develops a deep understanding of the intricate relationships between visual and textual information, thereby laying a robust foundation for various multimodal tasks. During the instruction fine-tuning phase, we employ the ChatML format to construct instruction-following data. This dataset encompasses not only pure text-based dialogue data but also multimodal conversational data. The multimodal components include image question-answering, document parsing, multi-image comparison, video comprehension, video stream dialogue, and agent-based interactions. Our comprehensive approach to data construction aims to enhance the model's capability to understand and execute a wide range of instructions across various modalities. By incorporating diverse data types, we seek to develop a more versatile and robust language model capable of handling complex, multimodal tasks in addition to traditional text-based interactions. Data Format. In line with Qwen-VL, Qwen2-VL also employs special tokens to distinguish vision and text inputs. Tokens vision start and vision end are inserted at the start and end of the image feature sequence to demarcate the image content. Dialogue Data. In terms of dialogue format, we construct our instruction tuning dataset using the ChatML format, where each interaction's statement is marked with two special tokens (<|im_start|> and <|im_end|>) to facilitate dialogue termination. The sections marked in blue indicate the supervised parts. Visual Grounding. To endow the model with visual grounding capabilities, bounding box coordinates are normalized within [zero, one thousand) and represented as "(X top left, Y top left), (X bottom right, Y bottom right)". Tokens <|box_start|> and <|box_end|> are utilized to demarcate bounding box text. To accurately link bounding boxes with their textual descriptions, we introduce tokens <|object_ref_start|> and <|object_ref_end|> to indicate the content that the bounding box references, thereby allowing the model to effectively interpret and generate precise descriptions of specific regions. Visual Agent. To develop Qwen2-VL as a general-purpose VL-Agent, we treat various agent tasks, such as UI Operations, Robotic Control, Games, and Navigation, as sequential decision-making problems, enabling Qwen2-VL to accomplish tasks through multi-step action execution. For each task, we first define a set of permissible actions and keywords pattern (underline) for function call. Qwen2-VL then analyzes the observations, performs reasoning and planning, executes the selected actions, and interacts with the environment to acquire new observations. This cycle repeats iteratively until the task is successfully completed. By integrating various tools and leveraging the vision perception capabilities of large vision-language models (LVLMs), Qwen two-VL is able to iteratively execute increasingly complex tasks involving real-world visual interactions. 
Multimodal Model Infrastructure The Qwen two-VL models were trained on Alibaba Cloud's PAI-Lingjun Intelligent Computing Service with its scalable computing, auto resuming and straggler detection. Storage. We use Alibaba Cloud's ultra-speed CPFS (Cloud Parallel File Storage) to build a storage system of Qwen two-VL pre-training and post-training. We decoupled the text data and vision data storage. We simply store text data on CPFS and use mmap for efficient access. For vision data, we use Alibaba Cloud's OSS (Object Storage Service) for persistent storage. During training, we accessed vision data through OSS's python-client concurrently and tuned the concurrency and retrying parameters to avoid reaching the QPS (queries per second) limit. We also found that video data decoding is a main bottleneck, especially for long videos. After several attempts with open-source and in-house software failed, we opted for a caching decoding technique. Checkpointing saves each GPU’s optimizer and model states on CPFS. Parallelism. We use three D parallelism which combines data parallelism (DP), tensor parallelism (TP) and pipeline parallelism (PP) to scale Qwen two-VL model training. We also leverage deepspeed's zero one redundancy optimizer to shard states for memory saving. Sequence parallelism (SP) with selective checkpointing activation was leveraged to reduce memory usage. When enabling TP training, we always shard the vision encoder and large language models together but not the vision merger due to its relatively few parameters. We found the TP training would result in different model shared weights due to the convolution operator's non deterministic behavior. We resolved this issue by performing offline reduction of the shared weights, thereby avoiding an additional all reduce communication step. This approach resulted in only a minimal impact on performance. We leverage one F one B PP for Qwen two VL seventy two B training. We combine the vision encoder, vision adapter and several LLM's decoder layers into one stage, and evenly split the remaining decoder layers. Note that the vision and text sequence lengths are dynamic for each data point. We broadcast the dynamic sequence lengths before initiating the one F one B process and access the shape information using batch indices. We also implemented an interleaved one F one B PP but found it is slower than the standard one F one B setting. Software. We use PyTorch version two point one point two with CUDA eleven point eight for training. Additionally, we leverage flash attention for efficient training in both the vision encoder and the LLM. We also utilize fused operators such as LayerNorm, RMSNorm, and Adam. Besides this, we leverage the overlap of communication and computation during matrix multiplication in our training process. Experiments In this section, we first evaluate the model's performance by conducting a comparative analysis across a variety of visual benchmarks, demonstrating the advantages of our approach.
Finished! This is the "Mommy, is Grandma making this quilt for me because she knows how much I love flowers?" quilt. I really intended it to be just a 'utility' quilt for around here in the summer, but how can a grandma resist a request like that? It occurred to me the other day as I was trying to fold the many quilts she already owns, and she insisted she likes them jumbled, that she doesn't even like being covered up when she sleeps! For a couple of years I scrounged pink flower fat quarters intending to make a pink and white Dresden Plate quilt. One day last summer I just got tired of moving the stash from one place to the next, so I cut triangles, sewed them back together in no order, and bordered it in a soft pink. Utility quilt. It is girly pretty, though. And I still have scraps. Today, Patient Husband and I saw a phenomenally good movie. Run, don't walk to the theater to see The King's Speech! Oh my goodness. All is calm, all is bright. This is the picture of the beach and lighthouse this morning. The church has the best real estate in town, and after Mass this morning we stopped so I could take a picture of the ice-encased lighthouse, snowy quiet beach, and the blue sky just starting to peek through the clouds. It really was beautiful, and a sight the summer people just don't appreciate. But we do. We had a quiet day today, just Patient Husband and me. You'd never have known it was the holiday. We had our time with our kids the weekend before, so this weekend was very quiet. After a nap, we went to some friends' house to help eat the leftovers from their Christmas after their kids left. We love their company, so the evening was good. Last night, Christmas Eve, my siblings, their children, our children, and I spent the evening with my mom. That's us. You know Elizabeth and wiggly Adelaide: my daughter, Lisa, my mom, and me. Yummy in your tummy! Now Christmas is over, it's time to start thinking of gooey goodies for the New Year's celebration. That, too, is very quiet for Patient Husband and me. We decided long ago not to pay the prices of going out on New Year's Eve — the weather is usually snowy and horrid — and, believe it or not, the New Year arrives even without our supervision. But I must tell you about this wonderful recipe I ripped from a magazine just before Christmas. I made it, and it's a wonderful (shhh... healthy!) delicious snack that could easily be a meal. Take three different kinds of olives: I used a pimento-stuffed green olive, a black olive, and a burgundy olive. Next time I'll be more adventurous. Drain them. The recipe calls for one cup of each, but I just used the whole jar. It was close enough. Toss the olives with: 1/4 cup olive oil, 1 tablespoon Herbes de Provence (a mixture of rosemary, cracked fennel, thyme, savory, basil, tarragon, dill weed, oregano, lavender, chervil, and marjoram), 8 cloves garlic (I used two tablespoons jarred minced garlic), and one pint grape or cherry tomatoes (I used two pints). Bake at 350 degrees. Posted fifty-nine years ago, this is what I looked like. She's pretty darned cute in that apron she got from Aunt Erin for Christmas! Isn't that the cutest apron you've ever seen? Isn't she the cutest little girl? She took to it like I take to aprons, too. She wore it all day Saturday and Sunday. Everyone kept saying, "This is what I look like in an apron now! Top to bottom." All she needs is the ladle in her hand. I guess it could be worse than to look like this sweetie. 
We had our family Christmas last weekend, so we corralled the kids for a picture, but poor Charlie was feeling very poorly—isn't someone always sick (or pregnant) over Christmas? Around here it seems so. Usually it's me after being exposed to 450 children at school, more than half of them coughing and sneezing. Charlie was battling a bad fever and just couldn't get his little self off the couch or someone's lap. It was sad to have him miss the fun of playing with Elizabeth and Adelaide, but he just couldn't move. His little sister Cecilia was in top form for the both of them, though. Sweethearts all. We had a good weekend. I hope all of you have a good one this Christmas weekend coming up. This is my recipe file. It's one of those challenges in life. When I need something, I sit on the couch or at the dining table and start going through it. The problem is, I know it's in here somewhere. I know if it's a torn-out magazine page or a newspaper clipping or a notecard or something scribbled on a napkin. I know what I'm looking for, and if I tried to organize this, or, as Patient Husband suggested, put them on the computer, then I wouldn't know what I'm looking for. I did once: when the kids each got married, I typed out a cookbook for them to take with them to their new life. I included all of the things they liked and the things they'll wish they'd asked for when I'm dead. It would have been so easy to print one out for myself, but I didn't. I wouldn't know what I'm looking for. Friend Marilyn tried to make the peppermint patties from a previous post. Is there anything that says "Christmas baking" like butter? At Christmas you have to be a purist on this. Nothing says lovin' like something baked with butter! It smells wonderful while it's baking, and with the very first taste it shows you cared enough to use the best. I made a small batch of cookies Sunday.
The cyclic redundancy check (CRC) is based on division in the ring of polynomials over the finite field GF(2) (integers modulo 2): coefficients are 0 or 1 and arithmetic is performed modulo 2. Any string of bits can be interpreted as the coefficients of a message polynomial M(x). To compute the CRC, multiply M(x) by x^n, where n is the degree of the generator polynomial G(x), and then divide x^nM(x) by G(x). The remainder R(x) of this Euclidean division, with degree strictly less than n, has coefficients that form the CRC bits; the quotient is not used. Equivalently, R(x) = x^nM(x) mod G(x), and the transmitted codeword is T(x) = x^nM(x) + R(x), which is divisible by G(x). On reception, the receiver either separates M and R and recomputes R to compare with the received R, or checks that the received codeword is divisible by G(x); if the remainders match (or the remainder is zero), the receiver assumes the message is correct. In practice, CRC calculations most closely resemble long division in binary, except that the subtractions involved do not borrow from more significant digits and thus become exclusive-or operations. A CRC is a checksum in a strict mathematical sense, as it can be expressed as the weighted modulo-2 sum of per-bit syndromes, but that term is generally reserved for sums computed using larger moduli such as 10, 256, or 65535. CRCs can also be used as part of error-correcting codes, which allow not only the detection of transmission errors but also the reconstruction of the correct message. These codes are based on closely related mathematical principles. Polynomial arithmetic modulo 2 Since the coefficients are constrained to a single bit, any arithmetic operation on CRC polynomials must map the result’s coefficients to either zero or one. In addition, coefficients are added modulo 2, so polynomial addition modulo 2 is the same as bitwise XOR. Since XOR is its own inverse, polynomial subtraction modulo 2 is also the same as bitwise XOR. Multiplication is similar (a carry-less product). We can also divide polynomials modulo 2 and find the quotient and remainder; the remainder is used in CRC calculations. In the above equations, M(x) represents the original message bits, G(x) is the generator polynomial, and the remainder R(x) is the CRC. If the degree of the generator polynomial is r, we first multiply the message polynomial by x^r to append r zeros. Variations: There are several standard variations on CRCs, any or all of which may be used with any CRC polynomial. Implementation variations such as endianness and CRC presentation only affect the mapping of bit strings to the coefficients of M(x) and G(x) and do not change the algorithm's properties. To check the CRC, instead of calculating the CRC on the message and comparing it to the transmitted CRC, a CRC calculation may be run on the entire codeword. If the result (called the residual) is zero, the check passes. This works because the codeword is divisible by G(x). This simplifies many implementations by avoiding the need to treat the last few bytes of the message specially when checking CRCs. The shift register may be initialized with ones instead of zeros. This is equivalent to inverting the first r bits of the message before feeding them into the algorithm. The CRC equation becomes M(x)x^r + R(x) ≡ 0 (mod G(x)), where r is the length of the CRC in bits. The change this imposes on R(x) is a function of the generating polynomial and the message length. 
This method is used because an unmodified CRC does not distinguish between two messages that differ only in the number of leading zeros, since leading zeros do not affect the value of M(x). When this inversion is done, the CRC does distinguish between such messages. The CRC may be inverted before being appended to the message stream. While an unmodified CRC distinguishes between messages with different numbers of trailing zeroes, it does not detect trailing zeroes appended after the CRC remainder itself. This is because all valid codewords are multiples of the generator polynomial, so any multiple of a valid codeword is also a multiple. (In fact, this is precisely why the first variant described above works.) In practice, the last two variations are almost always used together. They change the transmitted CRC, so they must be implemented at both the transmitter and the receiver. While presetting the shift register to ones is straightforward to do at both ends, inverting affects receivers implementing the first variation, because the CRC of a full codeword that already includes a CRC is no longer zero. Instead, it is a fixed non-zero pattern: the CRC of the inversion pattern of ones. The CRC may thus be checked either by the obvious method of computing the CRC on the message, inverting it, and comparing the result with the CRC in the message stream, or by calculating the CRC on the entire codeword and comparing it with an expected fixed value, called the check polynomial, residue, or magic number. This value may be computed directly, or equivalently by computing the unmodified CRC of a message consisting of all ones. These inversions are very common but not universally performed, even for CRC-32 and CRC-16-CCITT.
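A minimal Python sketch of the core computation, using the textbook degree-3 generator G(x) = x^3 + x + 1 as an illustrative example: since subtraction modulo 2 is XOR with no borrows, the Euclidean division reduces to shift-and-XOR long division, and the receiver-side check is that the residual of the whole codeword is zero.

def crc_remainder(message: int, msg_bits: int, poly: int, n: int) -> int:
    """R(x) = x^n * M(x) mod G(x), by shift-and-XOR long division over GF(2)."""
    reg = message << n  # append n zero bits: multiply M(x) by x^n
    for bit in range(msg_bits - 1, -1, -1):
        # Wherever the current leading bit is set, "subtract" G(x),
        # i.e. XOR it in (subtraction modulo 2 is XOR).
        if reg & (1 << (bit + n)):
            reg ^= poly << bit
    return reg  # degree strictly less than n, so this is the CRC

G, N = 0b1011, 3          # G(x) = x^3 + x + 1, degree n = 3
M = 0b11010011101100      # message polynomial M(x)
crc = crc_remainder(M, M.bit_length(), G, N)
codeword = (M << N) | crc  # T(x) = x^n*M(x) + R(x), divisible by G(x)
# Receiver-side check: the residual of the entire codeword is zero.
assert crc_remainder(codeword, codeword.bit_length(), G, N) == 0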
The soldiers showed up on Mari's birthday, so she thought it was her fault. I told her it was only coincidence, but Lily took her away anyhow. Lily said I was wasting my time; it was never about Mari's birthday. "Mari is a woman now. No matter what you tell her, she's going to think everything is her fault. Your validation is obsolete. From now on, she'll always look for things she can blame on herself." That was twenty-eight days ago. The soldiers are still here. It wasn't Mari or the soldiers. Lily wanted to leave. She just needed a good excuse. An army of soldiers marching around in the apartment building was good enough. The soldiers never sleep. Day and night they march up and down the stairs in a collapsed, winding circle. The building is three stories tall with crosscutting half staircases between floors. There are just enough soldiers that they are always passing one another going in the other direction. All of them are there for the same reason, going in the opposite and same directions all at once. Things at work weren't going well either. My boss, Patrick, launched what he called The Encouragement Program. I was chosen to be the first participant because of what he referred to as my "consistent and impressive performance." My Encouragement was named Sean. We shared my office. He did the same thing I did. The exact same thing. They gave him my clients, or rather, made me share my clients with him. The premise behind the program was that competition would foster higher productivity. Then, once I was outperforming my Encouragement, they would send Sean back and bring in another Encouragement for someone else. We would do this until everyone had been through the program and then start the whole thing over again. In theory, the program would improve overall performance so much that before long we would have to hire more people on a permanent basis. It was supposed to be good for everyone. For a while, Brian, who lives on the top floor, thought it was his fault the soldiers were there. He never said so, but he's been tiptoeing around this place ever since they got here. He had told me about a time a couple of years ago when he hit his girlfriend. It had never happened before and hadn't happened since. He said it was a cross between a punch and a slap, but closer to a punch. They had been sitting on the couch on a Friday or Saturday night, and his girlfriend, Shannon, kept talking on. He said he was as surprised as she was at what he had done, and for the next few minutes they just sat on the couch with blank expressions, Shannon tenderly feeling her face while he gingerly squeezed his outstretched fingers. It left a big dark bruise on her face. They called off work for a week to give the bruise time to fade. When their respective supervisors asked, they each said the other had surprised them with a cruise. He had never told anyone else about that, and for the life of him couldn't figure out what made him tell me. But he did. He didn't ask me not to tell anyone, so I told everyone. Not because I don't like him—I do—but now he makes me a little nervous about myself, and talking about it helps keep my nerves at bay. After a couple of weeks, my encouragement started to get the best of me. I was doing my best, but I was exhausted and this Sean guy was like a machine, a machine wearing a disguise of prosthetics and makeup to look like me. He told me this one night when we were working late.
Well, I was working late; he just stayed to tell me how he had managed to do everything faster than me and ask why I wasn't doing it that way. I told him to lay off. I hadn't been getting much sleep lately. Up to that point I had done a good job of keeping what was going on at home a secret, but I was exhausted and that night I slipped and said, "You try and sleep every night with an army marching outside your door." So I had to tell him about the soldiers, and he told me that was no excuse. That he didn't get much sleep either on account of having to wake up two hours earlier than normal to put this whole "get up" on. Neither of us believed the other. I never thought he looked that much like me to begin with. Lisa, from down the hall, thought they were there to take her away. That they were just waiting for the right time when no one would notice. I've tried to calm her down, always over the phone, because she won't come out anymore. I tell her that if they were there for her, she would already be gone. That if they took her, if that's what they were there for, then they would leave once they had her, and even if no one noticed she was gone, we would certainly notice the soldiers were gone, and someone would do something to get her back or at least make sure she was okay. It didn't do any good. She worried about not being missed. She had family, kids that lived a couple miles away, but they hadn't come to visit since she had moved in. That was five years ago.
There are many ways to garden in restricted spaces. A small or limited area is often an issue when growing and cultivating plants. Restricted-space gardens can be located on small lawns, balconies, patios, porches, rooftops, inside the home, or in any available place. Gardening in small spaces can include edible or ornamental plants. Growing food has many benefits, including saving money, producing healthier, fresher, and better-tasting food, and allowing control over pesticide and fertilizer exposure. Gardening is a good form of exercise and is therapeutic. Square foot gardening was popularized by Mel Bartholomew in the early 1980s. He wrote several books, appeared on television, and created a website on the subject. The basic idea is to use a box with equal length and width and divide it into one-square-foot areas. The original design was a box six to eight inches deep with four-foot sides, divided into 16 squares. The box can be placed on the ground or on supports so people who cannot bend easily or for long periods can garden as well. Boxes on supports need a bottom; boxes on the ground do not, although a weed barrier is recommended. Even if the box is on the ground, existing soil conditions are irrelevant because the box is filled with its own mix. It is recommended to fill the box with Mel's Mix. Mel Bartholomew created this mix and claims it never needs to be replaced. To make it, combine compost (homemade or store-bought), peat moss, and coarse vermiculite in equal parts by volume. After the soil is in the box, use string, stakes, or pieces of timber to divide the box into equal one-foot-by-one-foot squares. The ideal placement is a spot that receives six to eight hours of sunlight per day and is away from shading trees or shrubs. The box should be accessible from all sides; if this is impossible, make a narrower box (three by four feet) or a smaller box (three by three feet). Never walk in the plot — it will compact the soil and ruin the dynamics of the box. How many seeds to plant in each square foot depends on plant size: small crops (such as radishes and carrots) can be planted 16 per square with three inches between plants; medium plants (spinach, large turnips, bush beans) should have nine per square with four-inch spacing; large plants that need six-inch spacing (leaf lettuce, parsley, etc.) can be planted four per square; and extra-large plants (broccoli, cauliflower, cabbage, peppers) require a whole square for each plant. Certain plants require more than six inches of soil, such as root crops like carrots and potatoes and extra-long scallions and leeks. For these, a box one foot by one foot by six inches high can be created and placed on top of an existing square. Vine crops such as cucumbers, tomatoes, squash, melons, and pumpkins need vertical support. A variety of items can be used for support, such as electrical conduit, synthetic string, or nylon netting attached to metal supports; the supports should be attached to one side of the box. Do not position the vertical support so it shades the rest of the plot. After harvesting, add compost and replant the square with a different crop. Container or bucket gardening involves growing plants in some type of container, whether commercially produced or an everyday object such as a 5-gallon bucket, wooden crate, plastic storage container, or kiddie pool. Container gardening is convenient for those with limited space because the containers can be placed anywhere and as single items they take up very little room. 
There are also fewer weeds and reduced watering needs. It is inexpensive, and gardeners have personal control over growing conditions. To get started, find a container and make sure it has a hole in the bottom for drainage. Be aware that dark-colored containers heat up more and can harm plants. Porous containers dry out faster than metal or plastic, and previous contents (such as paint) may be toxic to plants and people. Place containers in western or southern exposure for the sunniest, warmest conditions, or in eastern or northern exposure for shadier, cooler conditions. Warm-season crops (squash, eggplant, tomato, pepper, etc.) need six to eight hours of direct sun; cool-season crops (Asian greens, spinach, lettuce, etc.) need three to five hours. Fill containers with a light, porous growing medium—commercial soilless mixes work well. Coarse builder's sand is useful because it is porous and heavy, which helps weigh down containers, and compost is highly recommended. Some good media mixtures for container vegetables include: 100% compost; 100% soilless mix; 25% garden soil + 75% compost; 25% soilless mix + 25% garden soil + 50% compost; 25% garden soil + 75% soilless mix; and 50% soilless mix + 50% compost (recommended by the Maryland Cooperative Extension). If you use fertilizer, choose a slow-release type. Almost any herb or vegetable can be grown in a container; look for seed packages labeled dwarf, bush, or small if space is limited. The University of Maryland Cooperative Extension recommends a growing-media depth of four to six inches for leaf greens, Asian greens, mustards, garlic, radishes, basil, cilantro, thyme, mint, and marjoram. Salad greens and some herbs have shallow, fibrous root systems and are well suited to shallow containers with a large surface area. Eight- to twelve-inch pots are ideal for beans, beets, chard, carrots, cabbage, peppers, eggplant, tomatoes, squash, rosemary, parsley, lavender, and fennel. Pot volume can also vary by crop; one- to three-gallon containers are suitable for herbs, green onions, radishes, onions, chard, peppers, dwarf tomatoes, and dwarf cucumbers. A larger size of four to five gallons is recommended for full-size tomatoes, cucumbers, eggplant, beans, peas, cabbage, and broccoli.
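The square-foot spacing rules quoted earlier reduce to simple arithmetic: a 12-inch square holds a grid of (12 divided by the spacing) plants per side. A minimal sketch, with the crop lists in the comments taken from the text:

def plants_per_square(spacing_inches: int) -> int:
    # A 12-inch square holds a (12 // spacing) x (12 // spacing) grid.
    per_side = 12 // spacing_inches
    return per_side * per_side

# 3" -> 16 (radishes, carrots); 4" -> 9 (spinach, bush beans);
# 6" -> 4 (leaf lettuce, parsley); 12" -> 1 (broccoli, peppers).
for spacing in (3, 4, 6, 12):
    print(spacing, plants_per_square(spacing))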
I've been a stay-at-home mom for nine years now—nine long years. The last paying job I had was with AAFES; I quit because we were due to come back to the States since Barry was discharged from the Army. I have been a stay-at-home mom from when I was seven months pregnant with the twins until now. During these last nine years I've been a wife (and all that entails), mommy, homemaker, housekeeper, nurse, doctor, hairdresser, lawn mower, garbage taker-outer, Christmas present wrapper—everything. I am now everything but the wife, and I feel lost, confused, bored... everything. Our plan was for me to get a job once the baby started first grade, which is in three years. But honestly, I don't know that I can wait that long. I might have to, though. But I refuse to put the kid in daycare just to get a job. It's a conundrum. I don't know what to do with myself. I'm so bored. I don't think there has been one day since the family left that we haven't gone out and done something. Either we're running errands, going to the mall, going to the library, going to friends' houses, or going on field trips with my Meetup group. The thing is, this is my life; this has been my life for the last nine years, so why am I suddenly disenchanted with it? I feel like a contradiction. How can I be a stay-at-home mom if I don't have a husband? Well, I do have a husband, but he's been stuffed into a nice wooden box on my entertainment center, and a plastic bag filled with extras that I requested for us to go to Build-A-Bear. If that sounds callous, you should know I just have to joke sometimes; Barry called it gallows humor. I never understood it until he died. Anyway, I just don't see what my purpose is right now. I can still be a stay-at-home mom; I don't have to work if I don't want to, but I am bored and uninspired. I've thought about looking into some sort of schooling, but I don't want to do that till the baby is in school either. I can't take time away from the kids right now because they need me. I have tons of crafts to catch up on. I was pondering a few things this morning, most of them beginning with the word "When?" When do I pack away his stuff? I do know there is no right or wrong answer to this question. I simply do it when I feel ready. I did have to clean out his truck two weeks ago, and the house was inundated with his belongings. Most of it I have incorporated into the house, like the food he left behind, and his can opener went in the kitchen. I've taken over his computers, but do I really need one PC (mine), his laptop, and his Eee PC? I have an extra BlackBerry Tour now, because we both got new phones in August. I also have his iPhone that he cancelled service for in August. I'll probably sell that one. The BlackBerry I'll keep because I have a bad history with phones, and this one will be a backup for me. But what about his clothes? His blankets we will keep, and the foamy eggshell bed thingies I have in the boy's room to put on the kids' beds as soon as I remember to do so. I sleep with his pillow between my knees each night, and the last T-shirt he wore is folded up under my pillow. When I packed up his truck, all the clean and dirty clothes were separated into different bags, but all stuffed into his big green Army duffel bag. They're still in there, sitting in a corner in my room. I haven't opened them in two weeks. I just don't want to deal with it.
All the stuff I had at the hospital for him is still in the bags: the last pair of shorts he wore, the book he was going to read, the chest hair I clipped off his chest after he died, the handful of crumpled-up tissues I had when I was crying and saying goodbye, the body wipes he had used to wipe his face off before he died. Little things like that I just have in the hospital bag. I looked at them this morning, but I haven't done that in a long time. I don't know what to do with them — do I keep them in the bag, or do I keep just what I want and get rid of the rest? Do I really need to keep the tissues? What about all the electronics? His DVDs? I'll be calling XM soon and cancelling one subscription. I feel good today! I hope the mood sticks because I've needed a positive day for a while. I went to the library's storytime today with my youngest and saw a friend from my kids' school. She helped coordinate meals and donations for us, so it was nice to talk with her.
I've been a stay-at-home mom for nine years now—nine long years. The last paying job I had was with AAFES; I quit because we were due to come back to the States since Barry was discharged from the Army. I have been a stay-at-home mom from when I was seven months pregnant with the twins until now. During these last nine years I've been a wife (and all that entails), mommy, homemaker, housekeeper, nurse, doctor, hairdresser, lawn mower, garbage taker-outer, Christmas present wrapper—everything. I am now everything but the wife, and I feel lost, confused, bored... everything. Our plan was for me to get a job once the baby started first grade, which is in three years. But honestly, I don't know that I can wait that long. I might have to, though, but I refuse to put the kid in daycare just to get a job. It's a conundrum. I don't know what to do with myself. I'm so bored. I don't think there has been one day since the family left that we haven't gone out and done something. Either we're running errands, going to the mall, going to the library, going to friends' houses, or going on field trips with my Meetup group. The thing is, this is my life; this has been my life for the last nine years, so why am I suddenly disenchanted with it? I feel like a contradiction. How can I be a stay-at-home mom if I don't have a husband? Well, I do have a husband, but he's been stuffed into a nice wooden box on my entertainment center, and a plastic bag filled with extras that I requested for us to go to Build-A-Bear. If that sounds callous, you should know I just have to joke sometimes; Barry called it gallows humor. I never understood it until he died. Anyway, I just don't see what my purpose is right now. I can still be a stay-at-home mom; I don't have to work if I don't want to, but I am bored and uninspired. I've thought about looking into some sort of schooling, but I don't want to do that till the baby is in school either. I can't take time away from the kids right now because they need me. I have tons of crafts to catch up on. I was pondering a few things this morning, most of them beginning with the word "When?" When do I pack away his stuff? I do know there is no right or wrong answer to this question. I simply do it when I feel ready. I did have to clean out his truck two weeks ago, and the house was inundated with his belongings. Most of it I have incorporated into the house, like the food he left behind, and his can opener went in the kitchen. I've taken over his computers, but do I really need one PC (mine), his laptop, and his Eee PC? I have an extra BlackBerry Tour now, because we both got new phones in August. I also have his iPhone that he cancelled service for in August. I'll probably sell that one. The BlackBerry I'll keep because I have a bad history with phones, and this one will be a backup for me. But what about his clothes? His blankets we will keep, and the foamy eggshell bed thingies I have in the boy's room to put on the kids' beds as soon as I remember to do so. I sleep with his pillow between my knees each night, and the last T-shirt he wore is folded up under my pillow. When I packed up his truck, all the clean and dirty clothes were separated into different bags, but all stuffed into his big green Army duffel bag. They're still in there, sitting in a corner in my room. I haven't opened them in two weeks. I just don't want to deal with it. 
All the stuff I had at the hospital for him is still in the bags: the last pair of shorts he wore, the book he was going to read, the chest hair I clipped off his chest after he died, the handful of crumpled-up tissues I had when I was crying and saying goodbye, the body wipes he had used to wipe his face off before he died. Little things like that I just have in the hospital bag. I looked at them this morning, but I haven't done that in a long time. I don't know what to do with them — do I keep them in the bag, or do I get rid of what I just want to keep? Do I really need to keep the tissues? What about all the electronics? His DVDs? I'll be calling XM soon and cancelling one subscription. I feel good today! I hope the mood sticks because I've needed a positive day for a while. I went to the library's storytime today with my youngest and saw a friend from my kids' school. She helped coordinate meals and donations for us, so it was nice to talk with her.
long_en_342
poet_en
943
en
Most nights I have time to start a craft project. I've been diving headfirst into learning how to sew. Nights are the best time to sew because Virginia is sound asleep; I don't have to worry about her climbing into my lap or pulling fabric off the table while I cut. At night I can enjoy the peace and quiet and focus all my attention on the YouTube instructional video or the pattern directions I'm trying to figure out. I've really enjoyed these nights so far. Tonight I finally did the drawstring bag project from The Crafty Gemini's YouTube video. For the most part it was pretty easy. I've started to get better at keeping my seams straight and I've slowed down on the pedal—what can I say, I have a lead foot. The end product turned out great; the video was clear and very easy to follow. Videos have been the best instructional path for me so far. I like to actually see someone going through the steps; I tend to get lost and confused when I'm just reading instructions and looking at a pattern. But I know I'll have to conquer that fear eventually, especially since I bought two pattern packages with very cute projects. Now I have an envelope pillow cover, a toddler skirt, and a drawstring bag under my belt. I believe my next challenge will be a purse. Just one of those days. It started off pretty well—my sister Susan and niece Samantha decided to come up and visit for the afternoon, which meant I had to launch into a whirlwind cleaning spree. I quickly cleaned the kitchen, living room, and bathroom. About halfway through the crazy cleaning, I decided it was Virginia's nap time. Cue the suspense music. Virginia did not like the idea of taking a nap; she had a fit, cried, and ran away. This went on for about 30 minutes. Finally, Virginia shut herself in her room and read her books. That may be our daytime compromise: if she doesn't want to nap, she has to stay quietly in her room for at least an hour. Needless to say, my nerves were shot by the time Susan and Samantha got here. Things improved for a bit. The kids played for a while, and then we headed to the Village Park Splash Pad. I had called ahead and gotten a voicemail saying the splash pad was open from 10:00 a.m. to 7:00 p.m., so we all hopped in the van and left. My first mistake was assuming Susan had a GPS—she didn't, so I tried to remember the way and my directions were a little fuzzy. Finally we managed to get there with some help from Uncle Joe, then the gray cloud above my head darkened when I saw the splash pad was closed and only open on weekends. Yeah, the playgrounds were too big for both Virginia and Samantha. So just imagine hauling two two-year-olds back into a van screaming. That was the scene before me. We headed back down the road to Dorton Park to play on the toddler playground. The heat was unbearable. We stayed for maybe 20 minutes or so. It felt like an eternity. The kids got to play and they were happy, but the heat was just too much, so we packed up and headed back to the apartment. Again, both kids were not happy with this decision. Once back at the apartment, Susan and Samantha packed up and went home. Virginia and I crashed on the couch and watched Dora the Explorer. My day was not as nice as I had hoped, but it was nice to see Susan and Samantha. Hopefully on the next visit things will be open and the weather will be cool with a nice breeze. At least I got a few nice pictures of Samantha, Virginia, and even Susan. This shirt was not my first choice to wear with this skirt. 
Maybe my next project will be to sew a red shirt to go with the skirt! Over the weekend, while I was down in Gastonia, I made a stop at Mary Jo's. I wanted to get a new pattern and some fabric for my next sewing project. Going to Mary Jo's is an experience. It wasn't my first time there, but I guess it was my first time there on a mission. I managed to find a dress pattern with no real help from the woman in charge of that area. I ventured into the wilds of the fabric store to decide which fabrics I wanted. My head was spinning with all the choices. I couldn't find the fabric shown on the pattern picture, but I chose some fabrics I thought would go nicely together. Once I found all the other items the pattern called for, I headed to my parents' house. When I opened the package and looked at the pattern, I knew I might have gotten in over my head. But hey, it gives me something to shoot for. Luckily, while browsing sewing sites today I found a lesson for a 20-minute skirt, so I decided to give it a try. It took me about an hour, and my seams are not perfectly straight, but the skirt fit Virginia, so I mark that as a success. I'm loving this sewing machine; I just have to pace myself and not try to do a project every week. My next project will be a purse.
Most nights I have time to start a craft project. I've been diving headfirst into learning how to sew. Nights are the best time to sew because Virginia is sound asleep; I don't have to worry about her climbing into my lap or pulling fabric off the table while I cut. At night I can enjoy the peace and quiet and focus all my attention on the YouTube instructional video or the pattern directions I'm trying to figure out. I've really enjoyed these nights so far. Tonight I finally did the drawstring bag project from The Crafty Gemini's YouTube video. For the most part it was pretty easy. I've started to get better at keeping my seams straight and I've slowed down on the pedal—what can I say, I have a lead foot. The end product turned out great; the video was clear and very easy to follow. Videos have been the best instructional path for me so far. I like to actually see someone going through the steps; I tend to get lost and confused when I'm just reading instructions and looking at a pattern. But I know I'll have to conquer that fear eventually, especially since I bought two pattern packages with very cute projects. Now I have an envelope pillow cover, a toddler skirt, and a drawstring bag under my belt. I believe my next challenge will be a purse. Just one of those days. It started off pretty well—my sister Susan and niece Samantha decided to come up and visit for the afternoon, which meant I had to launch into a whirlwind cleaning spree. I quickly cleaned the kitchen, living room, and bathroom. About halfway through the crazy cleaning, I decided it was Virginia's nap time. Cue the suspense music. Virginia did not like the idea of taking a nap; she had a fit, cried, and ran away. This went on for about thirty minutes. Finally, Virginia shut herself in her room and read her books. That may be our daytime compromise: if she doesn't want to nap, she has to stay quietly in her room for at least an hour. Needless to say, my nerves were shot by the time Susan and Samantha got here. Things improved for a bit. The kids played for a while, and then we headed to the Village Park Splash Pad. I had called ahead and gotten a voicemail saying the splash pad was open from ten colon zero zero a.m. to seven colon zero zero p.m., so we all hopped in the van and left. My first mistake was assuming Susan had a GPS—she didn't, so I tried to remember the way and my directions were a little fuzzy. Finally we managed to get there with some help from Uncle Joe, then the gray cloud above my head darkened when I saw the splash pad was closed and only open on weekends. Yeah, the playgrounds were too big for both Virginia and Samantha. So just imagine hauling two two-year-olds back into a van screaming. That was the scene before me. We headed back down the road to Dorton Park to play on the toddler playground. The heat was unbearable. We stayed for maybe twenty minutes or so. It felt like an eternity. The kids got to play and they were happy, but the heat was just too much, so we packed up and headed back to the apartment. Again, both kids were not happy with this decision. Once back at the apartment, Susan and Samantha packed up and went home. Virginia and I crashed on the couch and watched Dora the Explorer. My day was not as nice as I had hoped, but it was nice to see Susan and Samantha. Hopefully on the next visit things will be open and the weather will be cool with a nice breeze. At least I got a few nice pictures of Samantha, Virginia, and even Susan. This shirt was not my first choice to wear with this skirt. 
When I became a parent, I never saw myself becoming one of those parents who talks about the hardships from my own childhood. You know the stories: "When I was a kid we had to walk two miles to school. In the snow. With no winter coat! You kids should consider yourselves lucky to have transportation!" Well, first of all, we hardly lived anywhere that got that much snow, and secondly we always had buses. So I never considered what I would possibly tell my own kids that would be hard for them to imagine. Maybe lack of cell phones? I suppose, but they're not old enough to care about those yet. Then last weekend when we went away I realized what it was that is so much better now: the thing that they take for granted that we parents had to suffer through—the way they watch TV. My girls never have to wait for a show to be on. They can watch whatever show whenever they want, for the most part. The only thing keeping them from doing so is me. They have no idea how good they have it, and, in turn, how bad or good it is for me.

Last weekend, when we were at the hotel, we decided to take it a bit easy. Even though we wanted to get to the beach, we weren't going to wake up early and rush to get there. The girls, however, woke up early, and I decided to put something on for them to watch so I could go back to bed. Immediately, Lana yelled, "I want to watch Mickey!" "There's no Mickey here, Lana," I told her. "Why?" she asked. "Because you can only watch whatever shows are on TV right now. Nothing is recorded here." They all looked at me like I had five heads. What on earth could I be talking about? They could always watch whatever they wanted whenever, as long as I agreed. As far as they could tell, I was agreeing. What was the problem? I proceeded to find the kid-friendly channels and give them three options: PBS, HBO Family, or Disney Channel. They agreed on a show that was on at the time, and I went to lie back down. It was only 6:45—for crying out loud! After the show was over, they called for me to come put something else on.

I had one of those mornings where I thought, "Really? So this is how the day is going to go?" There wasn't one story that stood out, just a bunch of little things: the girls waking up at 6:15 a.m. when they hadn't gone to sleep until almost 8:45 the night before. Thanks, babysitter! Then I lost the little bag filled with Sonya's Box Tops for the school fundraiser. They were on the counter one minute and gone the next. Sonya also couldn't find her watch, which she insisted I had put somewhere, but for the life of me I don't remember doing that. She got a rubber band very tangled in her hair. I came this close to cutting the rubber band, her hair, or both. Both the hair incident and not finding the watch led to tears, mostly because she's so tired. Some of those tears stuck around as she walked out the door to school. Andy assured me she had calmed down by the time he dropped her off, but I'm pretty sure her day is going to include at least one more meltdown.

The little girls were a bit neglected while I was dealing with getting Sonya out of the house. They had finished their breakfast and wanted down, but I had hair to untangle so they would have to wait. Lana got herself down, but GG was strapped in and decided to show her unhappiness by throwing fruit all over the floor. They also had gymnastics and nobody wanted to get their leotard on, mostly Lana. After wrestling with her for a few minutes, I finally put her in time out to calm down — and to calm me down. Unfortunately, this meant we arrived at gymnastics fifteen minutes late. So yeah. Good morning! I felt like Alexander in the Terrible, Horrible, No Good, Very Bad Day. My only comfort is that it's Friday and we have a fairly low-key weekend. And since it's Friday, I will allow myself some wine this evening. Or perhaps this afternoon. When is it too early to start drinking?

Lost and Found update: I found the baggie full of Box Tops; they had somehow ended up in one of the kitchen drawers. Sonya's watch — I suddenly remembered her taking it off.

Kristi: About a month ago, I decided it would be a nice idea if just the five of us went away for the weekend. I know we went on vacation when my in-laws were here, but I wanted to do something with just our little family. I looked online and found a Marriott hotel in Newport Beach at a low price. My dad works for Marriott, so we get discounts at all Marriott hotels. You have to find the right time and the right hotel; I can't always get his rate, but most of the time I can. When I can, it's really cheap. He's never allowed to stop working there.