New eyeglasses allow you to adjust prescription yourself

(Phys.org) — A new kind of eyeglasses is now available from a British company that allows the wearer to adjust the prescription anytime, anywhere, via small thumb-dials on the sides. Called Eyejusters, the glasses make use of a technology called a Slidelens, which very aptly describes how these glasses do their magic.

Each lens is actually two lenses with slightly different shapes; turning the thumb-dial slides one lens slightly left or right, which changes the focal point for the wearer. The lenses are moved until the wearer finds the sweet spot, exactly the way a pair of binoculars is focused.

The glasses were developed by Dave Crosby, Owen Reading, Richard Taylor and Greg Storey, who found a common interest in self-adjustable glasses and in the process created a company to fulfill the goal of providing low-cost eyeglasses to the millions of people the world over who cannot afford a traditional pair. Their secondary market is people who could use adjustable glasses for particular tasks, such as reading, working on a computer or knitting, i.e. people who, as they get older, find they have trouble focusing while performing different tasks.

The website for the Eyejusters says their main market is the developing world, where many people with vision problems cannot afford to see an eye doctor, much less the glasses that would be prescribed. Eyejusters solve both problems: when ordered, they come with an eye chart that can be used to help discern whether a person's vision can be corrected with Eyejusters (the power range is 0 to +4.5 D for the positive version and 0 to -5.0 D for the negative version) and to figure out which version they need (for near or far vision correction).

The Eyejusters, which come with detachable thumb-dials, also come with a plastic case and a special cleaning cloth. The cloth can be used to clean both sides of both lenses, because the outer lens can be hinged down for easy access. The Eyejusters also offer UV protection, incorporated to help prevent eye damage from the sun, another common problem in underdeveloped countries.

Another interesting aspect of the adjustable glasses is that, with a little tweaking, they could be used to perform self-exams in more developed countries. By adding a digital display, the wearer could work out their own prescription and send it to a company that sells traditional eyeglasses, sidestepping an expensive trip to an ophthalmologist.

Citation: New eyeglasses allow you to adjust prescription yourself (2012, May 31) retrieved 18 August 2019

© 2012 Phys.Org

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
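The dial-to-prescription behavior described above lends itself to a simple numerical sketch. In Alvarez-style adjustable lenses, combined optical power varies roughly linearly with the lateral offset between the two shaped elements; the coefficient and the offsets used below are invented illustrations, not Eyejusters' published specifications, with the result clamped to the power range quoted in the article:

```python
def lens_power(offset_mm, k_diopters_per_mm=1.5, p_min=-5.0, p_max=4.5):
    """Combined power (diopters) for a lateral lens offset in mm.

    Assumes a linear power-vs-offset response (Alvarez-lens style);
    k_diopters_per_mm is a hypothetical device constant, not a real
    spec. The result is clamped to the article's stated power range
    (-5.0 to +4.5 D).
    """
    return max(p_min, min(p_max, k_diopters_per_mm * offset_mm))

# Under these assumed numbers, a 2 mm offset gives a +3.0 D correction,
# while a 10 mm offset saturates at the +4.5 D limit.
print(lens_power(2.0))   # 3.0
print(lens_power(10.0))  # 4.5
```

The clamping mirrors the physical travel limit of the dial: past a certain offset, the lens pair simply cannot add more power.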

Lack of diversity in pygmy blue whales not due to man-made cause

(Phys.org)—A team of researchers working in Australia has found, via DNA analysis, that the lack of genetic diversity in pygmy blue whales is due to natural causes, not whaling. In their paper published in the journal Biology Letters, the researchers, affiliated with a variety of institutions in Australia, describe their study of the whales and why what they learned may help save them.

Pygmy blue whale. Credit: © Research team (Attard et al.)

Pygmy blue whales are a subspecies of blue whale and, contrary to their name, are not really all that small: they average 24 meters in length, compared to their bigger cousins, which average 28 to 30 meters. For many years, marine biologists assumed that the lack of genetic diversity found in pygmy blue whales living around the shores of Australia was due to their numbers being cut by whaling. Now it appears that is not the case.

In this new effort, the researchers sought to better understand the history of the whales and how they wound up with so little diversity. To learn more, they obtained DNA samples from several specimens and studied patterns of genetic mutations, which allowed them to see that the whales had all descended from just a few individuals, a "founder group," beginning around 20,000 years ago. After comparing the pygmy whale DNA with that of blue whales living in other parts of the world, the team was able to ascertain that the subspecies had gotten its start as Antarctic blue whales.

The researchers note that 20,000 years ago the Earth was experiencing peak glaciation, which allowed blue whales (currently the largest animal on the planet) to travel to other places, one of which was, apparently, Australia. But then the glaciers retreated, leaving those that had migrated to adapt to their new environment; they evolved into the pygmy blue whales that exist today.

That is good news for the species, because it means their lack of diversity is not due to whaling (which did reduce their numbers dramatically, along with other blue whales); it is because they are still such a new subspecies. Also, because they are now a protected species, the researchers believe that if other threats, such as various sorts of human pollution, can be mitigated, the whales have a good chance of returning to their pre-whaling population.

More information: Low genetic diversity in pygmy blue whales is due to climate-induced diversification rather than anthropogenic impacts, Biology Letters, DOI: 10.1098/rsbl.2014.1037

Abstract
Unusually low genetic diversity can be a warning of an urgent need to mitigate causative anthropogenic activities. However, current low levels of genetic diversity in a population could also be due to natural historical events, including recent evolutionary divergence, or long-term persistence at a small population size. Here, we determine whether the relatively low genetic diversity of pygmy blue whales (Balaenoptera musculus brevicauda) in Australia is due to natural causes or overexploitation. We apply recently developed analytical approaches in the largest genetic dataset ever compiled to study blue whales (297 samples collected after whaling and representing lineages from Australia, Antarctica and Chile). We find that low levels of genetic diversity in Australia are due to a natural founder event from Antarctic blue whales (Balaenoptera musculus intermedia) that occurred around the Last Glacial Maximum, followed by evolutionary divergence. Historical climate change has therefore driven the evolution of blue whales into genetically, phenotypically and behaviourally distinct lineages that will likely be influenced by future climate change.

Journal information: Biology Letters

Citation: Lack of diversity in pygmy blue whales not due to man-made cause (2015, May 6) retrieved 18 August 2019

© 2015
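The founder-effect reasoning in the study can be illustrated with a toy simulation. This is not the paper's analysis, which applied Bayesian approaches to real whale DNA; the population sizes and allele counts below are arbitrary placeholders. The point is only that a small founder group necessarily carries over a fraction of the source population's variation:

```python
import random

def distinct_alleles(individuals):
    """Count the distinct allele types present in a population."""
    return len(set(individuals))

random.seed(1)  # reproducible illustration

# A large, diverse source population: 10,000 individuals drawn from
# 500 possible allele types (arbitrary numbers for illustration).
source = [random.randrange(500) for _ in range(10_000)]

# A founder event: only a handful of individuals start the new lineage.
founders = random.sample(source, 12)

# The founders can carry at most 12 allele types, however diverse the
# source was: low diversity arises without any human cause.
print(distinct_alleles(founders) <= 12)  # True
print(distinct_alleles(source) >= distinct_alleles(founders))  # True
```

This is the signature the researchers looked for in the mutation patterns: diversity consistent with a small founding group rather than a recent population crash.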

Biology, meet philology: First application of phylogenetic evolutionary framework to color naming

(Phys.org)—That there are universal patterns in the naming of colors across languages has long been a topic of discussion in a range of disciplines, including anthropology, cognitive science and linguistics. However, previous color term research has not applied an evolutionary framework to the analysis of these worldwide patterns. Recently, scientists at Yale University traced the history of color systems in language by applying phylogenetic methods across a large language tree. They not only validated the phylogenetic approach to culture, but also generated a precise history of color terms across a large language sample drawn from the Pama-Nyungan languages of Australia, and moreover provided evidence supporting the loss and, as had been previously known, the gain of color terms in the evolutionary process.

Prof. Claire Bowern discussed several key elements of the paper that she and Hannah J. Haynie published in the Proceedings of the National Academy of Sciences, these being:

- tracking the evolution of color terms across a large language tree in order to trace the history of color systems
- validating phylogenetic approaches to culture and providing an explicit history of color terms across a large language sample when working with the Pama-Nyungan languages of Australia
- finding alternative trajectories of color term evolution beyond those considered in the standard theories
- applying Bayesian phylogenetic methods to data that bridges linguistics, cultural anthropology, and cognitive science

"We had a lot of trouble getting data for the languages, as well as interpreting the data we had," Bowern tells Phys.org. "Since there's no preexisting set of color term datasets for these languages, we had to create it from grammars, dictionaries and field notes." Moreover, she adds, if a term wasn't in the dataset, they had to determine whether it was truly absent from the language or simply not recorded in their data. The researchers also ran into difficulty deciding whether a word was truly a color term, as was the case with orange, which had to be excluded from their list of colors because too many instances of the term referred only to the fruit.

"There is now a substantial body of literature on phylogenetic approaches to culture," Bowern continues. (Phylogenetics is the study of evolutionary relationships among species, individuals or genes, as modeled on trees.) "It's still somewhat controversial, but the critiques of phylogenetics outside of biology often focus on the differences between biological systems and other areas of change, such as in linguistics or culture." That said, she points out that these critiques often overlook the degree of difference among biological evolutionary processes. "It's uncontroversial that language changes; it's also uncontroversial that languages aren't organisms. The point is that from Darwin on, language and biology have had a long history of cross-pollination and co-inspiration."

"Our key insights were that color terms could disappear from languages (as well as be innovated), but that this term loss is shaped by our cognitive systems. That is, our perception of the world disposes us to talk about things in certain ways. For example, humans aren't very good at smelling (compared to hearing or seeing), so it's not surprising that we don't have a lot of smell-based vocabulary. The same is true for color – the color terms that are most frequently named are (with some simplification) those that refer to regions of the color spectrum that are maximally perceptually distinct from one another. It's therefore not surprising that different languages would end up with similar color inventories. Our work shows that those same principles are at work in the ways in which color terms drop out of use."

Bowern notes that the study is relevant to a number of related disciplines, including anthropology, psychology and linguistics. "Color naming has been a central question of all these disciplines for over 100 years now," she points out. "Color was an early battleground around the idea that perception might shape language, and language, perception. In essence, the debate revolved around cultural relativity and cultural differences, and whether languages that lexicalize the world in ways that differ significantly from their major European and Asian counterparts might yield crucial insights into human evolution and prehistory.

"There's also been discussion about the role of material culture in leading to color naming elaboration," she adds. "An interesting example is purple, which in English originally referred to a particular type of ink, and was only later generalized as a color term that could refer to things other than ink. Our work shows how insights from historical linguistics give us a better idea about how these issues interact."

The paper concludes that while the cognitive constraints described by the standard theories1,2 play a substantial role in the development of color term systems, further study from an evolutionary perspective can refine our understanding of the interaction of cognitive constraints and language change in shaping lexical systems.

"Perhaps an analogy with numbers would be useful," Bowern suggests. "It's not surprising that so many languages have a base ten system for counting, because the ease of counting on fingers and toes makes that a very salient choice. However, languages change over time through sound and word change, and so in many languages we find clear base ten systems being obscured." In English, she illustrates, the relationship between the words three and thirty isn't as clear as the relationship between eight and eighty. "We thought the same might be true for color – that is, while there might be cognitive constraints that lead humans to name colors in a certain way, we would also expect regular principles of language change, particularly semantic change, to apply – and indeed, that's what we find: The cognitive constraints provide a framework for the types of colors that tend to appear, but languages acquire and change their color words from many sources. In short, color terms undergo broadening and narrowing, as other words do – but they do so within a framework that constrains the organization of the system."

Not surprisingly, the scientists used fairly standard techniques in phylogenetics to look at color terms to address these challenges. "We defined the problem as one in which we were comparing theories, and testing which of the theories were well-supported by our data," she explains. "For example, we compared theories where languages only lost color terms to ones where they could only gain them, or where they could both gain and lose them." They found that their data and models did not support either of the first two theories, and moreover they had to find ways to model the data that were computationally tractable: the number of possible parameters for computer models with seven color terms was more than could be calculated.

Since the paper addresses the links between perception, language, and the categorization of the natural world, Phys.org asked Bowern whether more detailed perception of the environmental color spectrum leads to corresponding terms (a structural evolutionary change in the ascending visual pathway), or whether terminology comes first in agglutinative languages as a complex form combining a color with a function. "I assume that it's driven by terminology," Bowern says. "Remember that this is not about what can be perceived – people can perceive color differences that they don't have words for, and perception varies between individuals even when the color lexicon does not. However, within terminological change, there might be several different ways that the change could proceed. Perhaps as a term becomes more specialized to a part of a color range, an additional term is analogically brought into the system. Or, conversely, as a term becomes analogically applied to color, another term's range is contracted." Bowern notes that the researchers did not find evidence in the Australian data for compound colors, such as light red.

Another interesting question is the dissociation between color perception and language in color blindness. "This is a good illustration of why we should usually look at the population/language level, rather than at specific individual speakers. A language still contains a word for red and green, even if some fraction of the population cannot distinguish those colors. We still have a vocabulary for music, even though some people are tone deaf. However, one could imagine a hypothetical language or dialect where enough people are color blind that those colors come to be called by the same name – but that hasn't happened in our data set."

Bowern's research group currently has work in progress on several aspects of language change related to both Australian languages and language documentation and change in other regions. Specifically, they're investigating ways to use computer tools to produce sound- and text-aligned recordings – that is, to align a recorded story with the words in the corresponding transcription. "This is incredibly useful for language research," she tells Phys.org. Bowern is also continuing with her work describing Australian languages, which has already provided some surprises. "In working on Australian language trees we found that Australia has approximately 400 languages – far more than the 250 languages previously thought."

Fig. 1. Evolutionary pathways of color term systems, after WCS. Haynie HJ, Bowern C (2016) Phylogenetic approach to the evolution of color term systems. Proc Natl Acad Sci USA 113(48):13666-13671.

Fig. 2. Parameters in dependent models. Haynie HJ, Bowern C (2016) Phylogenetic approach to the evolution of color term systems. Proc Natl Acad Sci USA 113(48):13666-13671.

Fig. 3. Ancestral state reconstructions on consensus tree. Haynie HJ, Bowern C (2016) Phylogenetic approach to the evolution of color term systems. Proc Natl Acad Sci USA 113(48):13666-13671.

More information: Phylogenetic approach to the evolution of color term systems, PNAS (2016) 113(48):13666-13671, DOI: 10.1073/pnas.1613666113

Related:
1. Basic Color Terms: Their Universality and Evolution (University of California Press, Berkeley, CA)
2. The World Color Survey (CSLI Publications, Stanford, CA)

Journal information: Proceedings of the National Academy of Sciences

Citation: Biology, meet philology: First application of phylogenetic evolutionary framework to color naming (2016, December 9) retrieved 18 August 2019

© 2016
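The gain-versus-loss model comparison Bowern describes can be sketched with a minimal two-state model (color term absent or present) evolving along a single branch. This is a drastically simplified stand-in for the paper's Bayesian phylogenetic machinery; the rates and the toy observation below are invented for illustration only:

```python
import math

def transition_probs(gain_rate, loss_rate, t):
    """2x2 transition matrix P[i][j] for a two-state continuous-time
    Markov chain (state 0 = term absent, 1 = term present) over a
    branch of length t, using the standard closed form."""
    total = gain_rate + loss_rate
    if total == 0:
        return [[1.0, 0.0], [0.0, 1.0]]
    decay = math.exp(-total * t)
    p01 = (gain_rate / total) * (1.0 - decay)   # absent -> present (gain)
    p10 = (loss_rate / total) * (1.0 - decay)   # present -> absent (loss)
    return [[1.0 - p01, p01], [p10, 1.0 - p10]]

# Toy observation: an ancestor had a color term; a descendant lacks it.
# A gain-only model (loss rate 0) assigns this probability zero, while
# a gain-and-loss model does not, so such data favor allowing loss,
# echoing the paper's finding that terms are both gained and lost.
gain_only = transition_probs(0.5, 0.0, 1.0)
gain_and_loss = transition_probs(0.5, 0.5, 1.0)
print(gain_only[1][0])            # 0.0
print(gain_and_loss[1][0] > 0.0)  # True
```

In the real analysis such transition models are evaluated over an entire language tree and compared by how well they explain the observed tip data, rather than on a single hand-picked branch.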

New earthquake forecasting system gave reliable forecasts of Italian aftershocks

(Phys.org)—A trio of researchers with Istituto Nazionale di Geofisica e Vulcanologia in Italy has created an earthquake forecasting system that reliably predicted a series of aftershocks after a major quake in Italy. In their paper published in the open-access journal Science Advances, the team describes the system and its accuracy in predicting the series of aftershocks that occurred during 2016 and 2017.

A damaged building at Accumoli hit by the Amatrice earthquake (August 24, 2016). Credit: Marco Anzidei

Scientists have long sought forecasting methods for earthquakes, for obvious reasons; unfortunately, to date, little progress has been made despite hundreds of years of effort. Science has advanced in one area of forecasting, however: the aftershocks that occur after a major quake.

It has been noted that aftershocks tend to occur in sequences that follow a power law. Because of that, seismologists can make predictions about aftershocks based on the severity of the preceding major quake. But, as the researchers note, there are exceptions: sometimes aftershocks seemingly occur randomly, and occasionally they even precede a large quake. It was these exceptions that the researchers sought to address with a new and improved earthquake forecasting system.

The system built by the team is called an Operational Earthquake Forecasting (OEF) system and is based on three predictive models. The first two are models known as epidemic-type aftershock sequence (ETAS) models, which use clustering information to make predictions. The third is based on the short-term earthquake probability (STEP) model.

After refining their system, the researchers entered real-world data from a major earthquake that occurred in Amatrice, Italy in 2016 and from the series of aftershocks that followed. The aftershocks were so unusual that earthquake experts gave them their own name: the Amatrice-Norcia seismic sequence. After running the system, the researchers found that it "issued statistically reliable and skillful space-time-magnitude forecasts" for the event.

While the results are promising, the researchers are quick to point out that their system is still in the pilot stage, and thus it is still unclear how well it might perform when used to predict aftershock patterns after future earthquakes.

The village of Amatrice, completely destroyed by the central Italy seismic sequence of August-October 2016. Credit: Marco Anzidei

More information: Warner Marzocchi et al. Earthquake forecasting during the complex Amatrice-Norcia seismic sequence, Science Advances (2017). DOI: 10.1126/sciadv.1701239

Abstract
Earthquake forecasting is the ultimate challenge for seismologists, because it condenses the scientific knowledge about the earthquake occurrence process, and it is an essential component of any sound risk mitigation planning. It is commonly assumed that, in the short term, trustworthy earthquake forecasts are possible only for typical aftershock sequences, where the largest shock is followed by many smaller earthquakes that decay with time according to the Omori power law. We show that the current Italian operational earthquake forecasting system issued statistically reliable and skillful space-time-magnitude forecasts of the largest earthquakes during the complex 2016–2017 Amatrice-Norcia sequence, which is characterized by several bursts of seismicity and a significant deviation from the Omori law. This capability to deliver statistically reliable forecasts is an essential component of any program to assist public decision-makers and citizens in the challenging risk management of complex seismic sequences.

Journal information: Science Advances

Citation: New earthquake forecasting system gave reliable forecasts of Italian aftershocks (2017, September 14) retrieved 18 August 2019

© 2017
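The power-law decay of typical aftershock sequences mentioned above (the modified Omori law) is easy to state in code. The parameter values below are illustrative defaults, not values fitted to the Amatrice-Norcia sequence:

```python
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Expected number of aftershocks per day, t_days after the
    mainshock, under the modified Omori law n(t) = K / (c + t)**p.

    K, c and p are placeholder fit parameters for illustration; in
    practice they are estimated from each sequence's early data.
    """
    return K / (c + t_days) ** p

# The rate falls off steeply: far fewer aftershocks are expected on
# day 10 than on day 1. Sequences that deviate from this decay, as
# the Amatrice-Norcia sequence did, are what make forecasting hard.
print(omori_rate(1.0) > omori_rate(10.0))  # True
```

Models like ETAS build on this by letting every aftershock trigger its own Omori-decaying subsequence, which is what "clustering information" refers to in the article.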

Astronomers detect deep, long asymmetric occultation in a newly found low-mass star

An international team of astronomers has observed a deep, day-long asymmetric occultation in a recently detected low-mass star known as EPIC 204376071. In a research paper published February 21 on the arXiv preprint server, the scientists detail their finding and ponder various theories that could explain such a peculiar occultation.

Artist's conception of the knife-edge dust sheet passing in front of EPIC 204376071. Credit: Danielle Futselaar

Located some 440 light years away, most likely in the Upper Scorpius stellar association, EPIC 204376071 is a young (about 10 million years old) M star with a mass of about 0.16 solar masses and a radius of approximately 0.63 solar radii. The star has an effective temperature of nearly 3,000 K, a luminosity of around 0.03 solar luminosities and a rotation period of 1.63 days.

EPIC 204376071 was observed by NASA's Kepler spacecraft twice during its extended mission, known as K2. When the star was observed by Kepler for the second time, in late 2017, a group of astronomers led by Saul Rappaport of the Massachusetts Institute of Technology (MIT) identified a single occultation-like event in the light curve of the object.

The detected occultation lasted for about a day, and what interested the researchers most was that the event was extremely deep and noticeably asymmetric, with an egress about twice as long as the ingress.

"In this work, we report the discovery of a one-day-long, 80 percent deep, occultation of a young star in the Upper Scorpius stellar association: EPIC 204376071," the astronomers wrote in the paper.

The event blocked up to about 80 percent of the star's light for an entire day. Apart from this one-day occultation, frequent flares and low-amplitude rotational modulation, EPIC 204376071 turned out to be quiet over the total of 160 days of K2's two observational campaigns.

According to the paper, a few things make the detected event unique: the continuous coverage with half-hour sampling of the flux, the very clearly mapped-out asymmetry in the occultation profile, and the weak emission identified in the Wide-field Infrared Survey Explorer (WISE) 3 and 4 bands.

The researchers concluded that such a deep eclipse with these properties cannot be explained by another star crossing EPIC 204376071. They propose instead that this unique, very deep, long dip in the light curve could be caused by orbiting dust or small particles. A second possibility is that a transient accretion event of dusty material near the corotation radius of the star could be responsible for the observed occultation.

"We have explored two basic scenarios for producing a deep asymmetric occultation of the type observed in EPIC 204376071. In the first, we considered an intrinsically circular disk of dusty material anchored to a minor body orbiting the host star. (…) Second, we considered a dust sheet of material of basically unknown origin, though we do assume that the source of the dust is in a quasi-permanent orbit about the host star," the paper reads.

However, the astronomers add that it is too early to draw final conclusions about which of the two hypotheses is true. More studies of EPIC 204376071 are needed: in particular, radial velocity measurements to search for evidence of an orbiting body, and adaptive optics observations to search for scattered light from disk structures or evidence of low-mass wide companions.

More information: S. Rappaport et al. Deep Long Asymmetric Occultation in EPIC 204376071. arXiv:1902.08152 [astro-ph.SR]

Citation: Astronomers detect deep, long asymmetric occultation in a newly found low-mass star (2019, March 5) retrieved 18 August 2019

© 2019 Science X Network
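The shape of the reported event, a roughly 80 percent deep dip whose egress lasts about twice as long as its ingress, can be caricatured with a piecewise-linear light-curve sketch. This is purely illustrative; the real profile and its physical modeling (dust disk or dust sheet) are in the paper, and all timings below are invented placeholders:

```python
def relative_flux(t, t0=0.0, depth=0.8, ingress=0.33, egress=0.67):
    """Piecewise-linear sketch of an asymmetric occultation.

    Flux drops linearly from 1.0 over `ingress` days, then recovers
    linearly over `egress` days (twice as long here, echoing the
    reported asymmetry). depth=0.8 mirrors the ~80% obscuration; the
    timings are not fits to the actual K2 light curve.
    """
    if t < t0 or t >= t0 + ingress + egress:
        return 1.0                                   # out of occultation
    if t < t0 + ingress:                             # fast dimming
        return 1.0 - depth * (t - t0) / ingress
    return (1.0 - depth) + depth * (t - t0 - ingress) / egress  # slow recovery

print(relative_flux(-0.5))            # 1.0 (before the event)
print(round(relative_flux(0.33), 2))  # 0.2 (maximum: ~80% blocked)
```

With ingress plus egress summing to one day, the toy profile also reproduces the event's overall day-long duration.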

IBM announces that its System Q One quantum computer has reached its 'highest quantum volume to date'

IBM has announced at this year's American Physical Society meeting that its System Q One quantum computer has reached its "highest quantum volume to date"—a measure showing that the machine's performance has doubled in each of the past two years, the company reports.

Credit: IBM

Quantum computers are, as their name implies, computers based on quantum bits, or qubits. Many physicists and computer scientists believe they will eventually outperform traditional computers, but reaching that goal has proven to be a difficult challenge. Several big-name companies have built quantum computers, yet none are ready to compete with traditional hardware just yet. These companies have, over time, come to use the number of qubits a given quantum computer uses as a measure of its performance, but most in the field agree that qubit count alone is not a good way to compare two very different quantum computers.

IBM is one of the big-name companies working to create a truly useful quantum computer and, as part of that effort, has built models that it sells or leases to other companies looking to jump on the quantum bandwagon as soon as it becomes viable. In its announcement, IBM focused specifically on the term "quantum volume"—a metric that has not previously been used in the quantum computing field. IBM claims that it is a better measure of true performance, and it is using the metric to show that the company's System Q One has been advancing in line with Moore's Law.

As part of its announcement, IBM published on its corporate blog an overview of the results of testing several models of its System Q One machine. Quantum volume, a metric created by a team at IBM, is described as accounting for "gate and measurement errors as well as device cross talk and connectivity, and circuit software compiler efficiency." The team that created the metric wrote a paper describing it and how it is calculated, which they uploaded to the arXiv preprint server last November. In that paper, they noted that the new metric "quantifies the largest random circuit of equal width and depth that the computer successfully implements," and pointed out that it is also strongly tied to error rates.

More information: … ower-quantum-device/

Journal information: arXiv

Citation: IBM announces that its System Q One quantum computer has reached its 'highest quantum volume to date' (2019, March 5) retrieved 18 August 2019

© 2019 Science X Network
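Going by the arXiv description quoted above (the metric "quantifies the largest random circuit of equal width and depth that the computer successfully implements"), the reported number can be sketched as follows. The pass/fail results are hypothetical stand-ins, and the sketch assumes success is monotone in circuit size, which real benchmarking checks explicitly:

```python
def quantum_volume(pass_results):
    """Given {n: True/False} for whether width-n, depth-n model
    circuits met the benchmark's success criterion, report the
    quantum volume as 2**n_max (assuming success is monotone in n).

    This is a reading of the metric's published description, not
    IBM's benchmarking code.
    """
    n_max = max((n for n, passed in pass_results.items() if passed),
                default=0)
    return 2 ** n_max

# Hypothetical device that handles square circuits up to size 4:
results = {1: True, 2: True, 3: True, 4: True, 5: False}
print(quantum_volume(results))  # 16
```

On this reading, "doubling quantum volume each year" means managing square circuits one qubit wider and one layer deeper than the year before.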

Alipurduar conducts weeklong drive to check midday meal quality

Kolkata: The Alipurduar district administration recently undertook a week-long special drive to monitor the quality of midday meals in schools of the district. The special drive — 'Poshan Nirikshon' — involved inspection of 57 schools across six blocks of the district.

The checking covered as many as 28 parameters earmarked by the district administration, including whether the menu board is displayed, whether the headmaster tastes the food before it is served, and the availability of utensils and toilets. The drive was conducted after complaints regarding the quality of the midday meal were received from schools.

"We hope this programme will be instrumental in protecting children from classroom hunger and increasing school enrollment and attendance," a senior official of the district administration said.

Due to an acute staff crunch at the Sub-Inspector level, the district administration initiated the special drive with officers of the collectorate acting as supplementary inspecting officers. "The team will be giving us feedback, on the basis of which measures will be taken for improvement," the official added.

According to sources, after the team interacted with students, infrastructure deficits were found in a number of schools, such as faulty handpumps, the lack of separate toilets, boundary walls and dining space, and the absence of eggs in meals. "On the basis of the feedback, we have directed the BDOs in charge of the respective schools to complete the remedial measures on a war footing. We have also issued show-cause notices to a number of teachers who were found to be deviating from the required standard," the official added.

The initiative will gradually cover all 1,635 schools of the district.

KMDA forms Advisory Committee to monitor health of bridges

Kolkata: The Kolkata Metropolitan Development Authority (KMDA) on Wednesday constituted an Advisory Committee to monitor the health of the bridges in the city. The move comes in the wake of the Majerhat Bridge mishap on September 4.

It may be mentioned that the state government has already prepared a list of 15 bridges that need a structural health audit on a priority basis. The list includes Dhakuria Bridge connecting Golpark with Dhakuria, Bijon Setu connecting Gariahat with Kasba, Aurobindo Setu connecting Gouribari with Ultadanga, Chetla Bridge connecting Kalighat with Chetla, Kalighat Bridge connecting Kalighat with Gopalnagar, Durgapur Bridge connecting New Alipore with Chetla, Ultadanga Flyover connecting EM Bypass with VIP Road, Sukanta Setu connecting Jadavpur with Santoshpur, Bankim Setu connecting Howrah Maidan with Howrah Station, Chingrighata Flyover connecting EM Bypass with Salt Lake, Sealdah Flyover, Bridge Number 4 on the Park Circus rail line, Jibanananda Setu near Jadavpur police station and the arch-type bridge at Karunamoyee in Tollygunge.

According to sources in KMDA, the advisory committee will include Amitava Ghosal, a distinguished civil engineer and a member of the Technical Advisory Group for a number of new railway projects; Professor Sriman Bhattacharya or Professor Nirjhar Dhong, or both, from the Civil Engineering department of IIT Kharagpur; Asish Kumar Sen, chief engineer of KMDA; Bhaskar Sengupta, chief engineer (Roads and Bridges), KMDA; and Samiran Sen, independent consultant and fellow of the Institute of Engineers.

The committee will recommend the agencies that will immediately carry out structural health audits of a number of bridges and structures that are in bad condition. "It will assist the authorities in interpreting the reports of the agencies and arriving at a final action plan, and also suggest monitoring activities and finalise the terms of reference for the studies to be carried out, including the tests to be performed. It will further recommend instrumentation that may be feasible for immediate monitoring of the bridges," a KMDA source said.

Youths held for assault of cop

Kolkata: Two youths were arrested for allegedly assaulting a ticket inspector of the West Bengal Transport Corporation (WBTC) and a police officer on Friday morning at Karunamoyee in Salt Lake. After being produced before the Bidhannagar Court, the two were remanded to judicial custody till August 22.

According to police, two siblings, identified as Arijit Gupta and Soumyajit Gupta, were travelling on a bus on the S-9 route on Friday morning. During the journey, a WBTC ticket inspector, identified as Narayan Chandra Guha, boarded the bus and began checking passengers' tickets. When he asked the duo to show their tickets, they stated that they had not purchased any. When Guha asked them to purchase tickets immediately, Arijit and Soumyajit refused and hurled filthy language at him.

Guha remained silent till the bus arrived at the Karunamoyee bus terminus. There he told the duo that travelling without a valid ticket would attract a fine. Hearing this, the duo again turned abusive, and when Guha protested he was allegedly assaulted by the siblings.

They were immediately detained and the Bidhannagar North police station was informed. A few minutes later, an Assistant Sub-Inspector (ASI) of police, identified as Arunava Pan, reached the spot and intervened. After hearing about the situation, he reportedly asked the duo to pay the fine and leave. At this the youths became furious and attacked Pan with fists and blows. Pan suffered serious injuries and was rushed to Bidhannagar Sub-Divisional Hospital, where he was treated and discharged.

Later, Guha lodged a complaint and the duo was arrested on charges of obstructing a public servant in the discharge of public functions, voluntarily causing hurt to deter a public servant from his duty, and assault or criminal force to deter a public servant from the discharge of his duty.

The Real Reason Ian McKellen Turned Down the Role of Dumbledore in Harry Potter

Sir Ian McKellen is a living legend. But when Stephen Sackur, host of the BBC's HARDtalk, asked him, "Is there a role you've ever turned down because it was too puerile, too silly, just…?", he replied: "About once a week, yes. Oh yeah, lots of stuff." After Richard Harris passed away in 2002 of Hodgkin's disease, Ian McKellen was offered the chance to portray Professor Dumbledore in the Harry Potter series. He was already playing a famed archetypal wizened wizard in a different book adaptation and could easily have inhabited yet another one. Still, he said no.

Only recently, during the 20th-anniversary episode of a British talk show, did the actor reveal the real reason he declined one of the greatest offers ever to fall unexpectedly into his lap.

"When [Richard Harris] died, he played Dumbledore, the wizard; I played the real wizard, of course," said McKellen, referring to his career-defining portrayal of Gandalf in the Lord of the Rings trilogy. "When they called me up and said would I be interested in being in the Harry Potter films, they didn't say what part, but I worked out what they were thinking, and I couldn't."

A former rugby player, a prolific stage and screen actor, as well as a fine singer, Richard Harris wasn't a knight like his colleague, but he lived to be acknowledged as perhaps the best Irish actor who ever lived. He acted in about 70 movies over the course of his illustrious 50-year career. Highly critical of them as he was, Harris said not long before he died that "sometimes you have to make a low standard of film to sustain a high standard of living."

But the films Harris acted in during the later stages of his life were not of a low standard. On the contrary, they were nothing short of spectacular, and his performances brilliant. Think Gladiator and his portrayal of the Roman emperor Marcus Aurelius, or Harris as Abbé Faria, the priest-philosopher made political prisoner who was thrown into the Château d'If, the prison off the coast of Marseilles, where he counted stones and dug holes alongside Edmond Dantès (Jim Caviezel) in the film adaptation of Dumas's The Count of Monte Cristo.

And as Dumbledore, the headmaster of the wizarding school of Hogwarts, Richard Harris was just perfect. It was not meant to last, however, and much as he was invaluable to the story and irreplaceable to the franchise, he simply had to be replaced by someone else, which turned out to be even more problematic than the filmmakers first feared.

Eventually, the role went to Michael Gambon. But before that decision came the challenge of recasting. "The role is so fundamental to the character and narrative of the movies, and was played so beautifully by the late Richard Harris, that the studio and filmmakers intend to make a very careful and considered choice in casting the next actor to embody the headmaster of Hogwarts School," a spokeswoman for Warner Bros. declared after the actor died. Harris had only taken part in Harry Potter and the Philosopher's Stone in 2001, and Harry Potter and the Chamber of Secrets the year after.

Everyone saw Harris's lifelong friend Peter O'Toole as the prime candidate to take on the role, and the actor's family members were reportedly eager to see O'Toole become Dumbledore and wave the Elder Wand as his friend had. The two men being close and almost the same age seemed to make this the perfect choice. However, the years were catching up with O'Toole, and there were worries about whether he would be able to physically endure the six remaining films. Also, although never officially stated, it is safe to assume it would be tough for any actor to feel he had to live up to or reproduce his close friend's brilliant performance.

McKellen, the 78-year-old star, by then a wizard with experience, albeit in a different franchise, was offered the chance to replace Harris, but he declined out of respect for himself and for the other actor. McKellen apparently turned down the lucrative and tempting offer because he knew Harris didn't think much of his acting ability: "I couldn't take over the part from an actor who I'd known hadn't approved of me," he explained during the interview.

As critical as Harris was about his own work, he was just the same, if not more so, about the work of others. He once described McKellen, Sir Derek Jacobi, and Kenneth Branagh as "technically brilliant but passionless" actors. "They are technically brilliant, like Omega watches, but underneath they are hollow because their lives are hollow," he said, according to The Telegraph. McKellen acknowledged being part of the bunch accused of being passionless, saying, "I was in good company, yeah." Seemingly unfazed, he continued with "Nonsense," smiled, and moved on to the host's next question: "You could have been Dumbledore?" McKellen grinned but gave no audible answer.

"When I see the posters of Mike Gambon, who gloriously played Dumbledore, I think sometimes it's me. We get asked for each other's autographs," he said, before concluding the interview by confirming that when his days on Earth come to an end, he would be amused if his gravestone read "Here Lies Gandalf." For, as McKellen says, Gandalf is the only character he has ever played that put him "in contact with lots of people, especially young ones from all over the world, that I couldn't possibly know about and they let me into their lives to an extent," emphasizing that it was a privilege "to be allowed to impersonate a character that already was in the zeitgeist and meant a great deal as an example of how to behave in the world."

This Medieval Castle Played Many Roles in Monty Python and the Holy Grail

The strong stone walls that protect Doune Castle might strike onlookers with a sense of familiarity, like a place they've been before, even if they never actually have. That's because this Scottish stronghold is a popular filming location, making appearances in Ivanhoe, Outlander, and even Game of Thrones. Located near the city of Stirling in Perthshire, central Scotland, the structure is more than 600 years old, according to Historic Environment Scotland. The exceptionally well-maintained castle has been used in countless period film productions, but none have been as creative with the location as Monty Python and the Holy Grail.

Equipped with their trusty coconut shells, King Arthur and his men go on a quest in search of the Holy Grail, which brings them to a number of different castles, most of which are actually just Doune Castle viewed from different angles, as reported by Movie-Locations. Needless to say, the crew's creativity saved the production hundreds of thousands in costs without sacrificing the authenticity of the film.

The beautiful medieval castle has a far more colorful history than just being a backdrop for popular films. Built on the site of an older castle, in 1361 it came into the possession of Robert Stewart, 1st Duke of Albany, who was the ruler of Scotland from 1388 until his death in 1420. After his death the stronghold became an official royal retreat, up until the early 1600s. It was subsequently inherited by a series of nobles, until it finally found protection under the ownership of the nation in 1984. Today, Doune Castle is looked after by Historic Environment Scotland.

The expansive castle features spacious rooms that were dressed to look like different locations for the iconic Holy Grail. For instance, the hilariously choreographed "Camelot Song," a song-and-dance routine performed by the Knights of the Round Table when they first arrive at Camelot, takes place in the Great Hall, one of the best-preserved rooms in the castle.

The cinematography required quite a bit of creativity to make Doune look distinct enough to stand in for a variety of structures. One side of the castle became the fortress of Guy de Lombard, the leader of the French soldiers. The main entrance is where the (foolishly empty) wooden Trojan Rabbit is wheeled inside. In one of the rooms, Castle Anthrax is easily recognizable, according to Movie-Locations. Here, Sir Galahad is saved from the "almost certain temptation" of Zoot, Midget, Crapper and the other "young blondes and brunettes, all between sixteen and nineteen-and-a-half." (Script quotes taken from Another Bleedin' Monty Python Website.)

A few other structures were used for the movie. For instance, a portion of Bodiam Castle in East Sussex was shot as the exterior of Swamp Castle. But the rest of the locations in the story are, of course, shot back at the ever-versatile Doune.

Today, Doune Castle is a popular tourist destination for fans of Monty Python, as well as of the other prominent films shot on site. The groups in charge of accommodating visitors have gone the extra mile to make it a unique, interesting, and fun experience for those who wish to explore the castle's colorful history in film. According to Undiscovered Scotland, guests can borrow coconut shells at the castle's reception to pay tribute to the iconic sound effect used in the low-budget Monty Python and the Holy Grail. The running joke was the result of the cast having to use coconut shells to make horse-hoof sounds, since they didn't have the budget to afford real horses, according to Mental Floss.

Of course, that's all just icing on the cake when taking the castle's well-preserved beauty into account. The stunning interiors, historical artifacts and fixtures, and period architecture shed light on a time that is often only experienced through the magic of film and television. Visitors can get a sense of what real, functioning castles used to be like and, of course, achieve a feeling of oneness with the colorful stories they've learned to love through the screen.

Dirk Nowitzki to the Warriors rumors make zero sense

The Warriors are licking their wounds and now have to decide whether to make any significant additions in the offseason. Today, rumors surfaced in the San Jose Mercury News that Golden State has interest in speaking with Dirk Nowitzki, who opted out of his contract with the Mavericks today.

"One big name free-agent who probably will get a call from the Warriors in July: Dirk Nowitzki." — Tim Kawakami (@timkawakami), June 21, 2016

Nowitzki seems like an odd fit for the Warriors, because they need another shooter who can't defend like a hoarder needs another ceramic animal figurine. Their Finals loss to the Cavs was largely due to a lack of interior depth, which the Cavs exploited once Andrew Bogut went down for the series with a knee injury after Game 4. Today on Speak for Yourself, Colin and Jason played "Good Read or Bad Read?" over whether Nowitzki makes any sense for the Warriors.

Colin sees the Warriors adding Nowitzki as a massive overreaction and a bad read: "Bad Read. They missed out on back-to-back titles by a jumper. It was 89 all for about 4 minutes, and Kyrie hit a jumper, and Klay or Steph didn't. Tweak. This is the constant overreaction. Oklahoma City. No big moves, had them down 3-1. Tweak. Everybody wants to blow stuff up."

Whitlock thinks the Warriors are better off standing pat, or addressing more glaring needs: "You stand pat. You don't make a splashy move. You make a move for substance. And the substantive move to make is some toughness and some rebounding and some defense."

Both agreed that an ideal addition to address Golden State's defense and rebounding needs is free agent Joakim Noah. He can contribute as a defender, passer, and rebounder without needing designed plays or heavy ball possession.

VIDEO: John Daly pounded a drive off a beer can, then pounded the beer

Critics say that golf's problem is a lack of personality. That's never been a problem for John Daly. Neither has partying like a rockstar. Daly is a traveling sideshow nowadays, but it's a damn entertaining one.

After singing 'Knockin' on Heaven's Door' at an Augusta Hooters before The Masters last week, The Daly Show was down in Myrtle Beach for some boozin' and a little golf. Daly pulled a 21-and-over trick shot for some curious onlookers when he teed up a drive on a Bud Light can he had been drinking. He crushed the drive while barefoot and smoking a heater. After the shot, he picked up the beer and chugged it, because of course he did. Life is John Daly's 19th hole.

"WATCH @PGA_JohnDaly smash a ball off a beer can and then chug it. All while barefoot." — Myrtle Beach Golf (@MB_GolfHoliday), April 10, 2017

3 Hidden Security Risks for WordPress Users

WordPress is arguably the web's most popular content management system and blogging platform, and for good reason. The system is free, easy to use, and provides a wealth of features that would otherwise cost business owners thousands of dollars in development expenses.

But if something sounds too good to be true, it usually is. While the WordPress platform still represents a useful web development option for small businesses, it's critical that you familiarize yourself with some of the system's weaknesses to avoid its hidden security dangers. WordPress updates its platform frequently to respond to known threats but isn't able to police every possible one. Here's a look at the platform's three biggest security weaknesses:

1. WordPress is susceptible to attacks and URL hacking.

The WordPress platform executes server-side scripts in the PHP web development language, using commands sent via what are called URL parameters to control the behavior of the MySQL databases that store your site's data.

If that all sounds pretty technical, don't worry. You don't need to understand web coding to protect your site. What you do need to know is that this type of website structure is vulnerable to a certain type of attack: hackers can use malicious URL parameters to reveal sensitive database content, a technique known as an "SQL injection attack." Once hackers have this information, they can hijack your site and replace your content with spam or malware.

To protect your WordPress site from such an attack, consider modifying your site's .htaccess file, a configuration file that enables you to control the way your hosting server behaves. You can prevent hackers' URL parameter requests from succeeding by including the code found here. Note that this code is intended for WordPress owners who are using Apache-based web hosting.
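The linked code itself isn't reproduced in this article, but as a rough sketch of the general approach (the patterns below are illustrative assumptions, not the referenced code), mod_rewrite rules in .htaccess can reject any request whose query string carries common SQL injection tokens:

```apacheconf
# Hedged sketch only: reject query strings containing common
# SQL-injection tokens. The patterns are examples; test carefully
# before deploying, since overly broad rules can block legitimate
# requests.
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{QUERY_STRING} (union(.*)select|insert(.*)into) [NC,OR]
  RewriteCond %{QUERY_STRING} (drop(.*)table|information_schema) [NC,OR]
  RewriteCond %{QUERY_STRING} (concat\(|benchmark\() [NC]
  # Return 403 Forbidden for any matching request
  RewriteRule .* - [F,L]
</IfModule>
```

Rules of this kind are a blunt filter rather than a complete defense; keeping WordPress, its themes, and its plugins up to date remains the primary protection.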
If you aren't sure what type of hosting you use, or if you need assistance modifying your site's .htaccess file, contact your web hosting provider's support team or a private web developer.

2. Free WordPress themes frequently contain security exploits.

One of the biggest benefits of WordPress is that you can install it for free, use free plugins to add functions, and download free theme files to give your site an appealing look. Unfortunately, unscrupulous developers have laced downloadable theme files with everything from undetectable spam links to malware that infects a site once the theme is installed.

To keep your website safe, download files only from sources you know and trust. Paid themes represent less of a security risk than free themes. But if you want free themes, scan them with the antivirus program installed on your computer before uploading them, to detect any tampering.

3. WordPress's default login process can be easily hacked.

All WordPress dashboard logins are located at the same address across URLs, meaning that nearly every WordPress login page can be found here. Also, WordPress's default settings don't allow for secure logins. This means a site running on the WordPress platform may be susceptible to "brute force" attacks, in which bot programs try various login combinations in the hope that one lucky combination will grant access to the site.

To get a feel for how prevalent these attacks can be, consider that the sites hosted by the popular blogging site Copyblogger experience between 50,000 and 180,000 unauthorized login attempts each day. To protect your website, install the Limit Login Attempts plugin.
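On Apache-based hosting, login lockdown can also be applied at the server level. The snippet below is a minimal sketch under stated assumptions: the IP address is a documentation placeholder you would replace with your own, and the `Order`/`Deny`/`Allow` directives apply to Apache 2.2-style configurations (Apache 2.4 uses `Require ip` instead):

```apacheconf
# Hedged sketch: restrict the WordPress login page to a trusted address.
# 203.0.113.10 is a placeholder; substitute your own static IP.
# Apache 2.2 syntax shown; on Apache 2.4 use "Require ip 203.0.113.10".
<Files wp-login.php>
  Order Deny,Allow
  Deny from all
  Allow from 203.0.113.10
</Files>
```

This approach only suits administrators with a fixed IP address; for everyone else, plugin-based rate limiting of the kind described above is the more practical route.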
In addition, you can work with your web hosting provider to block IP addresses that make multiple unsuccessful login attempts.

While it might sound like a lot of work to take these precautions, you could expend far more time and effort trying to fix your site if you wind up the victim of a successful hacking attempt.

October 10, 2012. Opinions expressed by Entrepreneur contributors are their own.

Kim Kardashian and Kanye West Sue YouTube Co-Founder Over Leaked Video

Oft-exposed Kim Kardashian and Kanye West have slapped YouTube co-founder Chad Hurley with a lawsuit for exposing a bit more of their private lives than they wanted.

West and Kardashian — who, one recalls, had her own career launched because of her, um, performance in an amateur video — are suing Hurley for allegedly leaking a video of their marriage proposal online through his new video-app project, MixBit.

The duo claim Hurley wasn't even invited to the Oct. 21 event, though in attendance were Alison Pincus, owner of the shopping website One Kings Lane and wife of Mark Pincus, as well as Rap Genius co-founder Mahbod Moghadam and VC Ben Horowitz. Hurley gained access only after signing a non-disclosure agreement forbidding him from posting videos or images of the extravaganza. The court filing even shows a picture of Hurley holding up the signed agreement. Apparently, Hurley took his chances.

The two-and-a-half-minute video, titled "PLEEESE MARRY MEEE!!! Congrats Kim and Kanye!", shows West proposing to his baby momma at AT&T Park in San Francisco as a 50-piece orchestra plays Lana Del Rey's "Young and Beautiful." (It was reported that Del Rey flat out rejected West's request to perform at the event.) Hurley tweeted the clip out to his more than 500,000 followers, and it has racked up more than 1.6 million views.

"So this happened last night. Congrats Kim & Kanye!" — Chad Hurley (@Chad_Hurley), October 23, 2013

Kimye's lawyer claims Hurley did this out of desperation, to save MixBit. "Following a lackluster launch and unsuccessful ensuing debut, Hurley sought to salvage MixBit from its dour beginning," the lawsuit states. "An opportunity to do so appeared to Hurley when he learned of the October 21, 2013, event featuring West and Kardashian," it continues.

It seems unlikely that the lawsuit stemmed from privacy concerns; rather, they, not Hurley, wanted to cash in on the event, as the Keeping Up with the Kardashians crew was there filming every precious moment. Kardashian and West are seeking unspecified damages.

November 1, 2013

Drone Accidents Not Your Fault

This story originally appeared on PCMag.

Did you buzz your neighbor's shrubs or crash a drone into your garage? Good news: it might not be your fault.

Researchers at Australia's RMIT University School of Engineering and Edith Cowan University found that many drone accidents are due to the gadgets themselves rather than human error. Dr. Graham Wild and Dr. Glenn Baxter from RMIT and John Murray of ECU studied 150 civil incidents involving drones and found that 64 percent occurred due to technical problems. The study, published in the journal Aerospace, pointed to "broken communications links" between the operator and the drone. As a result, the researchers think commercial aircraft-type regulations should be applied to the communications systems of drones.

"Large transport category aircraft, such as those from a Boeing or Airbus, are required to have triple redundant systems for their communications," Dr. Wild said in a statement. "But drones don't, and some of the improvements that have reduced the risks in those aircraft could also be used to improve the safety of drones."

Exceptions could be made for drones under 55 pounds, he said, though pilots should have licenses. "It's essential that our safety regulations keep up with this rapidly growing industry," Dr. Wild argued.

The team started its research after reports of a drone collision at Heathrow Airport, though authorities later said it was "not a drone incident." Still, collisions do occur, hitting everything from ferris wheels and the Empire State Building to the White House lawn (and there are certainly some dumbasses flying drones who should not be).

August 25, 2016

5 Ways Artificial Intelligence Is Already Transforming the Banking Industry

Artificial intelligence (AI) — and its growing impact on and applicability for individuals and businesses alike — is one of today's most widely discussed topics. From virtual assistants like Siri and Alexa to chatbots created by Facebook and Drift, AI is having a significant impact on the lives of consumers.

A study from Statista showed that the number of consumers using virtual assistants worldwide is expected to exceed one billion in 2018. Additionally, a 2018 survey by Accenture projected that 37 percent of U.S. consumers will own a digital voice assistant (DVA) device by the end of 2018.

It is readily apparent how AI-powered technology is making inroads into everyday life through DVAs and other consumer products, but AI is also having a transformative effect on an industry that touches virtually all consumers and businesses: banking. Here are five ways AI is already transforming the banking industry.

Customer service automation

As natural language processing technology evolves, consumers find it increasingly difficult to distinguish between a voice bot and a human customer service representative. This stems from the improved ability of voice bots and chatbots to resolve customer issues without human intervention.

The benefit to banks of customer service automation is obvious: AI could lead to significant cost reductions. A recent study by Autonomous predicted that AI could lead to 1.2 million jobs being cut in the banking and lending industry, resulting in up to $450 billion in industry savings by 2030.

Despite the potential rewards customer service automation promises, banks and other businesses need to proceed with caution before relying too heavily on voice bots and chatbots. The popularity of GetHuman, a website that connects consumers with human CSRs to resolve their issues, illustrates this.
In fact, voice bots and chatbots often work best when augmenting rather than replacing humans. At a minimum, the option to speak to a human, if necessary, should be readily available.

Want an example of how banks are creatively employing AI to serve customers? The Swiss bank UBS, ranked number 35 globally by volume of assets according to Accuity's August 2018 global bank rankings, has partnered with Amazon to incorporate its "Ask UBS" service into Alexa-powered Echo speaker devices.

Ask UBS, which is aimed at UBS's European wealth management clients, enables users to receive wide-ranging advice and analysis on global financial markets just by asking Alexa. "Ask UBS" also acts as a teaching resource, offering definitions and examples of acronyms and jargon related to the financial industry.

While Ask UBS can place a call from a UBS financial advisor to a customer's phone upon request, it is not yet able to access individual portfolios, execute trades, or perform other transactions, and it can't offer personalized advice based on a client's holdings and goals. According to the Wall Street Journal, the reason is primarily security and privacy concerns. More in-depth and personalized service through a DVA may not be far off, though: in the article, a UBS spokesman stated that the company's aim is to make "Ask UBS" and similar tools "secure, compliant, and trustable for clients."

Personalization

Banks have access to a wealth of customer data, including detailed demographics, website analytics, and records of online and offline transactions.
By utilizing machine learning to integrate and analyze information from multiple discrete databases to form a 360-degree customer view, banks are better positioned to personalize products, services, and interactions based on the behavior of individual clients.

According to James Eardley, global director of industry marketing for enterprise software giant SAP, "The next step within the digital service model is for banks to price for the individual, and to negotiate that price in real time, taking personalization to the ultimate level."

While personalized pricing of this kind may only become prevalent in the future, banks are already utilizing AI-processed behavioral data to advise individual clients on appropriate credit and savings products, based on their goals and habits. Santander, the world's 14th-largest bank measured by current assets, even hosted a competition, with a prize of $60,000, on the machine-learning crowdsourcing site Kaggle, encouraging data scientists to write code that better "pairs products with people."

Security

In the banking and payments industry, personalization extends far beyond marketing and product customization, into security. A growing number of banks are utilizing biometric data, like fingerprints, to replace or augment passwords and other forms of client verification.

A report by Goode Intelligence forecast that 1.9 billion bank customers will be using some form of biometric identification by 2021. The Guardian reported that U.K. bank Halifax even experimented with Bluetooth wristbands that identified a client's unique heartbeat to authenticate account access.

In a widely discussed innovation to its popular iPhone, Apple has evolved its Face ID so that it now uses AI-powered facial-recognition technology to unlock the device as well as validate purchases using Apple Pay, its digital wallet service.
As facial recognition and other biometric authentication techniques become more sophisticated and secure, they are poised to become increasingly commonplace.

Process optimization

One of the most promising applications of AI in banking comes from automating high-volume, low-value processes. In one example, reported by McKinsey, JPMorgan began using bots to process internal IT requests, including employees' attempts to reset their work passwords. Up to 1.7 million requests were expected to be handled by the bots in 2017, doing the work of 40 full-time employees.

Pattern recognition and fraud prevention

The ability of AI to sift through massive amounts of data and identify patterns that might elude human observers is one of its greatest strengths. One area where this capacity is particularly relevant is fraud prevention.

According to McAfee, cybercrime costs the global economy $600 billion. AI and machine-learning solutions are being deployed by many financial service providers to detect fraud in real time. An additional benefit of improved fraud-prevention technology is that legitimate activity is flagged as fraudulent less often: according to Tech2, Mastercard was able to reduce "false declines" for its customers by 80 percent using AI technology.

Final thoughts

The fintech revolution is still in its infancy, but alongside AI it has already had a substantial impact on the way traditional banks do business. This presents digital entrepreneurs and investors with myriad opportunities for improvement. According to CB Insights, the first quarter of 2018 alone saw a global record $5.4 billion in funds raised by VC-backed fintech companies.
Having a high-level understanding of the goals big banks are looking to achieve with AI — including customer-service automation, personalization, improved security, process optimization and pattern recognition — hopefully provides food for thought and inspiration for digital entrepreneurs attracted to the fintech space.

September 12, 2018. Opinions expressed by Entrepreneur contributors are their own.

Major Chromebook Update Adds Android Pie and Google Assistant: Report

Henry T. Casey

A huge new update — Chrome OS 72 — is taking flagship Google features to more Chromebooks. Specifically, that's version 72.0.3626.97 of Chrome OS, which, according to one report, brings Google Assistant and Android 9 Pie to select devices. Oddly, this news isn't being promoted by Google itself, as the release notes don't mention it. Instead, we're learning it from an outside report, which notes that Android 9 Pie is a surprise perk of this new update, which also bakes in Google Assistant, a feature no longer limited to Google's own Pixelbook and Pixel Slate.

It appears that these Chrome OS 72 perks are rolling out gradually to select devices, and you can learn how the update affects your machine by opening About Device.

Another important feature that arrives with Chrome OS 72 is external drive support for Android apps. That means Android apps installed via the Google Play store can access files on your external accessories. Even the Pixel Slate, during my testing, was missing this capability.

The release notes also state that "Chrome browser has been optimized for touchscreen devices in tablet mode." We hope this translates to reduced stutter, which Google was reportedly working on.

How to chain VPN servers

How to chain VPN servers by Martin Brinkmann on May 19, 2016 in Security

VPN chaining is a technique in which multiple virtual private network (VPN) servers are chained together to improve online privacy while on the Internet. Basically, it means that you do not connect to a single VPN but to multiple ones in a layered system that looks like this: Your PC > 1st VPN > 2nd VPN > Internet.

Before we take a look at the how, we should discuss why you would want to do that. One argument is that you cannot trust any of the VPN providers out there. While most claim these days that they don't log, there is virtually no way to prove that this is indeed the case. And even if they don't log user activity, they may still be forced to cooperate and log the activity of certain users connecting to the system, for instance when ordered to do so by a court of law or when coerced.

VPN chaining improves privacy by connecting to multiple VPN servers operated by different companies which — preferably — operate in different jurisdictions. The advantage is that it becomes increasingly difficult to track users when they chain VPN servers. There are disadvantages, however: the setup is complicated, maintaining multiple VPN accounts is more expensive than just one, and there is still a possibility of being tracked.

Advantages:
- Improved privacy

Disadvantages:
- Complicated setup
- More expensive (unless free services are used)
- Slower speeds, higher latency
- Possibility of being tracked is still there

How to chain VPN servers

Unless you operate all the VPN servers that you want to chain, you cannot simply connect to the first VPN in the chain and be done with it. Connecting to multiple VPNs simultaneously on the same device does not work well either, which leaves virtual machines as the best solution to get the ball rolling. Basically, you connect to one VPN on the device you are using, and to the others that you want as part of the chain in virtual machines. A simple chain would look
like this: PC > 1st VPN > Virtual Machine > 2nd VPN > Internet. You would have to perform all activity inside the virtual machine to take advantage of the chaining.

How it works:
1. Download VirtualBox from the official website and install the virtualization software.
2. Download and install an operating system, Linux Mint for instance, in VirtualBox.
3. Get accounts at two or more VPN services.
4. Connect to the first VPN on the device you are using.
5. Connect to the second VPN in the virtual machine. If you have followed the suggestion above, connect to the VPN using Linux Mint.
6. Verify that the VPNs are chained by checking IP addresses: the host device should return a different public IP than the virtual machine.

Crazy chaining: you can add as many VPN services to the chain as you like, but you need to install a virtual machine inside the virtual machine for each of them.

Installation of VirtualBox and the guest operating system should not pose problems for most users. The installation of the VPN service, on the other hand, may; but most VPN providers offer instructions on their websites that detail the installation process on various operating systems, including Linux.

Closing Words

VPN chaining improves online privacy, and while it does not offer 100% protection, it offers far better protection than a single VPN (which in turn offers better protection than connecting directly to the Internet).

Now You: Do you use a VPN?
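The IP-address check in the last step can be scripted. Here is a minimal sketch in Python, assuming the ipify service (any "what is my IP" endpoint that returns JSON would do): run it once on the host and once inside the virtual machine, then compare the two addresses. The chain is only working if they differ.

```python
import json
import urllib.request

# Hypothetical choice of service; any "what is my IP" JSON API works.
IP_SERVICE = "https://api.ipify.org?format=json"

def public_ip(service_url=IP_SERVICE):
    """Return the public IP address reported by an external service."""
    with urllib.request.urlopen(service_url, timeout=10) as response:
        return json.loads(response.read().decode())["ip"]

def chain_active(host_ip, vm_ip):
    """Both machines must have an exit IP, and the IPs must differ."""
    return bool(host_ip) and bool(vm_ip) and host_ip != vm_ip

if __name__ == "__main__":
    print("Public IP seen from this machine:", public_ip())
```

If both machines report the same address, the virtual machine's traffic is leaving through the host's VPN only and the second hop is not in effect.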

Test hard drives for bad sectors with Hard Disk Validator

Test hard drives for bad sectors with Hard Disk Validator by Martin Brinkmann on July 30, 2018 in Software

Hard Disk Validator is a free portable program for Microsoft Windows devices that tests any connected hard drive for bad sectors and related issues.

Failing hard drives are quite the problem. While it is possible to mitigate data loss by creating regular backups, it is also necessary to find a suitable replacement for the drive, connect it to the PC and migrate the data to it.

Hard drives are made up of sectors that data gets written to, along with checksums that should match the data of each sector. In a bad sector, the checksum does not match the sector's data; this can be caused by power outages, unexpected restarts, failing hard drives, and other issues, for instance the one that throws "The memory could not be written" error messages.

Hard Disk Validator may be used to run a series of checks on hard drives to find out if they have bad sectors or are becoming less reliable in other areas. You can run Hard Disk Validator directly after you have downloaded the archive to the local system and extracted it. Note that it requires an older version of the .NET Framework, which may be installed during setup on newer versions of the Windows operating system.

We have reviewed comparable programs in the past. Check out HDDScan, Disk Scanner, or HDD Guardian, to name just a few.

Hard Disk Validator

The program interface is straightforward. Select one of the connected drives at the top, and then one of the available test scenarios on the right. Note that the developer suggests running only read tests on the operating system drive; for the other tests, he suggests either connecting that drive to a secondary PC, or booting into a recovery environment and running them from there. As always, it is recommended to create a backup of the entire hard drive before you use Hard Disk Validator.

You may run the following four operations:
- Read — tests read capabilities. Reads all sectors of the hard drive to find bad sectors.
- Read – Wipe Damaged – Read — same as above, except that the program attempts to overwrite bad sectors and read them again to verify whether they are okay.
- Read – Write – Verify – Restore — writes test patterns to disk and verifies them to make sure sectors are okay, then restores the original data afterward.
- Write – Verify — same as above, but without the restoration of the original data.

Tests take different execution times, with read being the fastest. The program displays all sectors of the hard drive and uses color codes to indicate the status of each sector: green means everything is okay, red that the sector is damaged.

Closing Words

Hard Disk Validator is a program that you run when you suspect that a hard drive may be failing, or for verification on a regular schedule. There is no option to schedule scans, so you have to run the program manually whenever you want to verify hard drives. The program reveals bad sectors of hard drives to you and may be used to fix them if the issue has not been caused by hardware failure.

Now You: Do you check your drives regularly?
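To make the test scenarios above concrete, here is a minimal Python sketch of the same two ideas — a sector-by-sector read test, and a write–verify–restore pass on a single sector. This is a hypothetical illustration, not Hard Disk Validator's actual code, and it is shown against an ordinary disk image file; raw devices need elevated rights and a different way to determine the drive size.

```python
import os

SECTOR_SIZE = 512  # bytes; match the drive's logical sector size

def read_test(path, sector_size=SECTOR_SIZE):
    """Read a disk image sector by sector.

    Returns a list of byte offsets where the read raised an error;
    an empty list means every sector could be read.
    """
    bad_offsets = []
    total = os.path.getsize(path)  # raw devices report 0 here; query size via ioctl instead
    with open(path, "rb") as disk:
        offset = 0
        while offset < total:
            try:
                disk.seek(offset)
                if not disk.read(sector_size):
                    break  # unexpected end of image
            except OSError:
                bad_offsets.append(offset)  # unreadable sector
            offset += sector_size
    return bad_offsets

def write_verify_restore(path, offset, sector_size=SECTOR_SIZE, pattern=b"\xaa"):
    """Write a test pattern to one sector, verify it, then restore the original data."""
    with open(path, "r+b") as disk:
        disk.seek(offset)
        original = disk.read(sector_size)
        disk.seek(offset)
        disk.write(pattern * sector_size)
        disk.seek(offset)
        ok = disk.read(sector_size) == pattern * sector_size
        disk.seek(offset)
        disk.write(original)  # restore the data that was there before
    return ok
```

On Windows, a raw drive can be opened as r"\\.\PhysicalDrive0" with administrator rights; as the article notes, back up the drive first, since an interrupted write–verify–restore pass can leave a sector holding the test pattern instead of the original data.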