You are watching an optimisation algorithm come up with the best design completely automatically. The outcome is the stiffest shape possible for a given amount of material. And amazingly it’s a nuanced truss that isn’t far removed from the look of most motorway bridges. That’s pretty reassuring, actually.
This sample 2D image was made with ToPy – open source Python ‘Topology Optimisation’ code.
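ToPy itself solves the full 2D finite-element problem, which is far too much to reproduce here. As a toy sketch of the same idea — put a fixed budget of material where it stiffens the structure most — here is a made-up one-dimensional version: distribute material over springs in series, where spring i gets material x_i and contributes w_i / x_i to the compliance (the w_i load weights are invented for illustration). Minimising compliance subject to a fixed material budget has the analytic optimum x_i ∝ √w_i, so we can check the optimiser finds it:

```python
import numpy as np

# Toy material-distribution problem (not ToPy): n springs in series,
# compliance C(x) = sum_i w_i / x_i, subject to sum(x) = V.
# Projected gradient descent should drive x_i towards sqrt(w_i), scaled
# to meet the material budget.

def optimise_material(w, V, iters=5000, lr=0.01):
    x = np.full(len(w), V / len(w))     # start from a uniform layout
    for _ in range(iters):
        grad = -w / x**2                # gradient of C(x)
        x = x - lr * grad               # descend on compliance
        x += (V - x.sum()) / len(w)     # project back onto sum(x) = V
        x = np.clip(x, 1e-6, None)      # material must stay positive
    return x

w = np.array([1.0, 4.0, 9.0])           # heavier-loaded springs need more
x = optimise_material(w, V=6.0)
print(x)                                # approaches the optimum [1, 2, 3]
```

The optimiser puts material where the load is, exactly the behaviour that produces the truss in the image — just in a setting small enough to verify by hand.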
Ref: Algorithms that design structures better than engineers – Jordan Burgess
New crime prediction software being rolled out in the nation’s capital should reduce not only the murder rate, but the rate of many other crimes as well.
Developed by Richard Berk, a professor at the University of Pennsylvania, the software is already used in Baltimore and Philadelphia to predict which individuals on probation or parole are most likely to murder and to be murdered.
In his latest version, the one being implemented in D.C., Berk goes even further, identifying the individuals most likely to commit crimes other than murder.
If the software proves successful, it could influence sentencing recommendations and bail amounts.
Beginning several years ago, the researchers assembled a dataset of more than 60,000 crimes, including homicides. Using an algorithm they developed, they found a subset of people much more likely to commit homicide when released on parole or probation. Instead of finding one murderer in 100, the UPenn researchers could identify eight future murderers out of 100.
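The article doesn’t say how Berk’s model works internally, so the following is only a generic sketch of the ranking idea: fit a simple risk model — here a hand-rolled logistic regression — on entirely synthetic records, score everyone, and see how concentrated offending is among the top-scored 1%. The features, coefficients, and rates are all invented:

```python
import numpy as np

# Synthetic data: risk rises with prior arrests, falls with age.
rng = np.random.default_rng(0)
n = 10_000
priors = rng.poisson(2, n)                       # invented prior-arrest counts
age = rng.uniform(18, 60, n)                     # invented age at release
true_logit = -4.0 + 0.8 * priors - 0.05 * (age - 18)
y = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # 1 = later offends

def z(v):                                        # standardise a feature
    return (v - v.mean()) / v.std()

X = np.column_stack([np.ones(n), z(priors), z(age)])
w = np.zeros(3)
for _ in range(2000):                            # gradient descent on log-loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - y) / n

scores = X @ w
flagged = scores >= np.quantile(scores, 0.99)    # the 1% scored riskiest
print(f"base rate {y.mean():.3f} vs flagged rate {y[flagged].mean():.3f}")
```

Even this crude model concentrates offenders heavily in the flagged group — the same “one in 100 versus eight in 100” effect the article describes, though on fake data the numbers mean nothing in themselves.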
Ref: Software Predicts Criminal Behavior – ABC News (via DarkGovernment)
The Google effect is the tendency to forget information that can be easily found using internet search engines such as Google, instead of remembering it.
The phenomenon was described and named by Betsy Sparrow (Columbia), Jenny Liu (Wisconsin) and Daniel M. Wegner (Harvard) in July 2011.
Having easy access to the Internet, the study showed, makes people less likely to remember certain details they believe will be accessible online. People still remember: they remember the details they cannot find online, and they remember how to find what they need on the Internet. Sparrow said this made the Internet a type of transactive memory. One result of this phenomenon is dependence on the Internet; if an online connection is lost, the researchers said, it is similar to losing a friend.
Ref: Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips – ScienceMag
Could software agents/bots have bias?
This question is addressed by Nick Diakopoulos in his article ‘Understanding bias in computational news media‘. Even though the article focuses on algorithms related to news (e.g. Google News), it is interesting to ask this question of any kind of algorithm. Could algorithms have their own politics?
Even robots have biases.
Any decision process, whether human or algorithm, about what to include, exclude, or emphasize — processes of which Google News has many — has the potential to introduce bias. What’s interesting in terms of algorithms though is that the decision criteria available to the algorithm may appear innocuous while at the same time resulting in output that is perceived as biased.
Algorithms may lack the semantics for understanding higher-order concepts like stereotypes or racism — but if, for instance, the simple and measurable criteria they use to exclude information from visibility somehow do correlate with race divides, they might appear to have a racial bias. […] In a story about the Israeli-Palestinian conflict, say, is it possible their algorithm might disproportionately select sentences that serve to emphasize one side over the other?
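This mechanism is easy to simulate. In the toy example below — every number is invented — a filter never sees which group a source belongs to; it only applies an innocuous-looking length cutoff. But because length happens to correlate with group membership, the visible output ends up skewed all the same:

```python
import numpy as np

# "Group-blind" filtering that still produces group-skewed output.
rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                   # source group 0 or 1
# Hidden correlation: group 1 tends to publish shorter items.
length = rng.normal(500 - 80 * group, 100, n)

visible = length >= 480                         # "neutral" editorial cutoff
rate0 = visible[group == 0].mean()
rate1 = visible[group == 1].mean()
print(f"visibility: group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
```

The cutoff is simple and measurable, and nothing in the code mentions the groups — yet group 0 gets roughly twice the visibility. That is exactly the kind of “innocuous criteria, biased-looking output” Diakopoulos describes.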
For example, could we say that the RiceMaker algorithm – which automates the vocabulary game on FreeRice to generate rice donations – has a ‘left-wing’ political orientation?
Ref: Understanding bias in computational news media – Nieman Journalism Lab
Ref: RiceMaker – via #algopop
How to make sense of Philadelphia’s City Council district map?
Even with the best of intentions, districting problems can be difficult to solve because they are so complex, says Kimbrough, who specializes in computational intelligence. The key to finding the best solution, he suggests, is to start with not one but many good solutions, and let decision makers tweak plans from there.
The team created a genetic algorithm that mimics evolution and natural selection across candidate district maps, generating endless variations from just a few good starting plans.
The team then selected 116 of the strongest variations, giving human decision-makers a pool of algorithm-generated plans to choose from.
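The shape of that approach can be sketched in a few lines — this is not the Wharton team’s code, and the wards, populations, and fitness function are all invented. Each “plan” assigns 20 wards to 4 districts, fitness rewards balanced district populations, and, crucially, the output is a pool of good plans rather than a single answer:

```python
import random

random.seed(0)
WARDS = [random.randint(50, 150) for _ in range(20)]   # invented ward populations
K = 4                                                  # number of districts

def fitness(plan):
    totals = [0] * K
    for pop_w, d in zip(WARDS, plan):
        totals[d] += pop_w
    mean = sum(totals) / K
    return -sum((t - mean) ** 2 for t in totals)       # balanced = fitter

def crossover(a, b):                                   # mix two parent plans
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(plan, rate=0.05):                           # random reassignments
    return [random.randrange(K) if random.random() < rate else d
            for d in plan]

plans = [[random.randrange(K) for _ in WARDS] for _ in range(100)]
initial_best = max(map(fitness, plans))
for _ in range(200):
    plans.sort(key=fitness, reverse=True)
    elite = plans[:20]                                 # keep the fittest plans
    plans = elite + [mutate(crossover(random.choice(elite),
                                      random.choice(elite)))
                     for _ in range(80)]

good_plans = sorted(plans, key=fitness, reverse=True)[:10]
```

Decision-makers can then inspect and tweak any plan in `good_plans`, which mirrors the 116-alternatives idea on a toy scale.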
This is an interesting example where humans and algorithms are working together to solve a problem.
“In the end, there are a lot of human judgments that go on here,” notes Murphy. “What really is that neighborhood? Can you split the wards? … Generating one solution is not a good idea because there are all these side issues that you can’t represent mathematically. This always happens, whether in political districting or in commercial applications.”
Ref: A New Approach to Decision Making: When 116 Solutions Are Better Than One – Knowledge Wharton