The intersection point on this graph is the end of the world as we know it.
The security of an internet built on the cryptography we’ve had since day one will be permanently broken.
Military, banking, private messaging, identity, medical and legal records all exposed. Only post-quantum encryption algorithms will save us.
There is still no solid plan for this from any major corporation or institution, let alone any government.
Recently I took a fresh look at the Go programming language, having last looked at it before version 1.0 was released back in 2009. What I’ve found in just a few days is shocking in a really, really good way.
The questionable, widespread technical interview process has its roots in a Cold War era approach pioneered by a public proponent of eugenics. Minorities suffer the most from the industry’s continued use of this long-disproved ritual of conformity.
First, thank you for reaching out. People like you help other people make a living by matching them with companies and opportunities.
If you got a link to this post from me, that means I read your email and found it irrelevant to my profile. This is usually the case because:
Your role does not fit my experience. In many of my roles I led a team of engineers, managers and product managers. Is the scope you offer similar? How do you think being a front-end engineer on your team matches my current level and experience?
Your compensation is off. If you are trying to recruit top talent from a FAANG company you should be aware of their compensation expectations. Levels.fyi can give you an idea of what is expected at each level. Can this company offer similar (or higher) pay to L6-7/M1-2 at Facebook/Google/Amazon? Also, while I’m sure your startup is about to take over the world, I’m not looking to get paid in virtual money stocks.
I don’t do meetings. My time is important. Yours too. Time is the only currency we have. From reading your message I can see that there’s a low chance of a match, so let’s save time and skip the meeting.
How can you do better next time? Invest more in research before reaching out; spend most of your time on it. Don’t send generic messages to every engineer in your LinkedIn filter list. First, it will probably piss them off; second, you will end up in the spam folder. Read about each potential candidate, evaluate their experience, and only if you’re 80% sure there’s a match send them an email. The email should be tailored to their experience, showing how it matches this new role. Also do some homework on compensation and make sure the company can match the pay for the talent you are targeting. I do believe that a personalized message will get you a better response rate with the candidates you approach. Even if it’s not a fit, you will establish a good relationship with them, which can be very useful in the future.
Most #ShitCoin consensus algorithms are based on Proof of Stake. This is no coincidence. It allows for centralized control of what are essentially just digital fiat currencies, and it enables the same fraud we’ve seen in most economies whose paper fiat currencies are gamed by banks and governments.
The loudest voices on blockchain tech know the least. Proof Of Stake chains rest on circular logic fallacies and a misunderstanding of fungibility and forks. Fatal flaws that will not be fixed by rearranging security deposits. The deck chairs on the Titanic were very neatly organized.
The Golden 20s of the 20th century began with the end of WW1 and ended with the stock market crash of 1929. It is remembered as the calm before the storm. It was, however, a decade of sweeping technological change and innovation that transformed industrial productivity, everyday life and, in turn, the world economy. It improved the general well-being of many. Electricity, the internal combustion engine and radio changed everyday life forever.

Still, a lot has changed in the hundred years since then. The invention of the transistor happened over 70 years ago and the Internet has been mainstream for 25 years now. Commodity cloud computing has been around for over 15 years (thanks, AWS). Looking back at the 2010s, though, there have not been any earth-shattering technological inventions. Don’t get me wrong, there have been numerous incredible improvements to existing technology in cost, speed and pervasiveness. The focus has been more horizontal than vertical.

My prediction for the 20s is that we are going to see some major technological changes. Why ? There are just too many forcing functions in the tech industry that can no longer be ignored. Corporations that sell user data and corporations that ignore data security are losing market share, and quickly at that. The FBI’s estimates of internet-enabled cyber-crime put the growing cost at around $10 billion for the 2010s, and that number will only continue to rise. The rapid growth of IoT devices compounds this issue.
The sooner a language is formally specified and standardized, the better. I don’t want to see Rust go the way of languages like Ruby (which eventually got a spec) or Clojure (which still does not have one). The longer a spec is delayed, the more external libraries spring up to solve problems that should be fully or partially addressed in the core of the language. A formal language spec along with standardization would also potentially inform much higher-quality crate design, while allowing quicker progress on many fronts due to less ambiguity. It would also allow for multiple tool-chain implementations, e.g. compilers.
TL;DR: this problem was solved 40 years ago, but the software development industry is still very fashion-oriented.
In both Common Lisp and Smalltalk, an error handler can resume the computation that signaled the error, restart it, or decline to handle it, and the error-handling code itself can be separated from the point where the error occurs, i.e. factored into re-usable functions. All without unwinding the stack and losing the context of why and where the error occurred.
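Here is a minimal Common Lisp sketch of what that looks like (the names are illustrative, not from any particular library):

;; A toy sketch of the Common Lisp condition system. The signaling
;; site offers restarts; the handler, established elsewhere, picks one.
;; Crucially, the handler runs BEFORE the stack unwinds.
(define-condition bad-entry (error)
  ((entry :initarg :entry :reader bad-entry-entry)))

(defun parse-entry (e)
  (restart-case
      (if (numberp e)
          e
          (error 'bad-entry :entry e))
    (use-value (v) v)      ; resume with a replacement value
    (skip-entry () nil)))  ; or resume by skipping this entry

;; A re-usable recovery policy, completely separate from PARSE-ENTRY:
;; on any BAD-ENTRY, resume by substituting 0.
(defun parse-all (entries)
  (handler-bind ((bad-entry
                   (lambda (c)
                     (declare (ignore c))
                     (invoke-restart 'use-value 0))))
    (mapcar #'parse-entry entries)))

;; (parse-all '(1 :oops 3)) => (1 0 3)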
Ada (previously) is a time-tested, safe, secure software-engineering language with a 40-year record of success in mission-critical applications such as…
Air Traffic Management Systems
Desktop and Web Applications
Commercial Aviation
Banking and Financial Systems
Railway Transportation
Information Systems
Commercial Rockets
Commercial Shipboard Control Systems
Commercial Imaging Space Vehicles
Television/Entertainment Industry
Communication and Navigational Satellites and Receivers
Medical Industry
Data Communications
General Industry
Scientific Space Vehicles
Military Applications
How good is Ada, though, for something that most programmers might work on in their day-to-day ? Something like building a REST service ? Well, here’s the hello-world of a REST service in Ada.
The Raspberry Pi 4 is a leap forward, not just for the Pi but for single-board computers across the board. It’s a great lightweight desktop replacement. It has even surprised me as a viable Rust / Ada development environment.
My setup includes the FLIRC case for the Raspberry Pi 4, which is basically a giant aluminum heat sink. This allows for a completely silent setup running at an average of 54°C. During extended code compiles it tops out around 65°C, which is more than enough cooling and completely worth the silence compared with a fan.
Ada is the only pragmatic language, still growing in a healthy way, that meets the Steelman language requirements (created by the US DoD circa 1978). Ada is rare among programming languages in that it was designed up-front according to a years-long, well-defined specification. Rust at least aspires to meet some of these requirements, even if not intentionally.
Crucial in the Steelman requirements were:
A general, flexible design that adapts to satisfy the needs of embedded computer applications.
Reliability. The language should aid the design and development of reliable programs.
Ease of maintainability. Code should be readable and programming decisions explicit.
Easy to produce efficient code with. Inefficient constructs should be easily identifiable.
No unnecessary complexity. Semantic structure should be consistent and minimize the number of concepts.
Easy to implement the language specification. All features should be easy to understand.
Machine independence. The language shall not be bound to any hardware or OS details.
Complete definition. All parts of the language shall be fully and unambiguously defined.
As I’ve been spending a lot of time with Arm hardware lately as my primary desktop and server platform, I missed using my secure messenger app of choice. I was able to cook up builds for both armv7l/armhf GNU/Linux (32-bit) and arm64/aarch64 GNU/Linux (64-bit).
These are unofficial but fully working builds of Signal Desktop for Linux on Arm processors. Enjoy !
Here are some initial thoughts on Rust in the almost two years since I last looked at it along with some implementations of merge and quick sort. (These are just my opinions so please don’t panic !)
1. Cargo is awesome for managing package dependencies and building projects.
2. Rust is a very nice systems programming language that supports a functional style of programming. It is much easier to work with than C/C++, with very nearly the same performance in many cases.
3. Strong static inferred type system !
4. The authors have not neglected the rich and varied history of programming language theory while trying to be very pragmatic about the design and usefulness of the language.
Let’s look at implementing some popular sorting algorithms. First … quick sort.
I won’t be pedantic here by explaining the quick sort algorithm, but an elegant and performant implementation is ~20 LOC. Not bad for a systems-level programming language.
A few months back I took a look at Elixir. More recently I’ve been exploring F# and I’m very pleased with the experience so far. Here is the ring probabilities algorithm implemented in F#. It’s unlikely that I will ever use Elixir again, because having the powerful static type system F# provides at my disposal is just too good.
let rec calcStateProbs (prob: float, i: int,
currProbs: float [], newProbs: float []) =
if i < 0 then
newProbs
else
let maxIndex = currProbs.Length-1
// Match prev, next probs based on the fact that this is a
// ring structure.
let (prevProb, nextProb) =
match i with
| i when i = maxIndex -> (currProbs.[i-1], currProbs.[0])
| 0 -> (currProbs.[maxIndex], currProbs.[i+1])
| _ -> (currProbs.[i-1], currProbs.[i+1])
let newProb = prob * prevProb + (1.0 - prob) * nextProb
Array.set newProbs i newProb
calcStateProbs(prob, i-1, currProbs, newProbs)
let calcRingProbs parsedArgs =
// Probs at S = 0.
// Make certain that we are positioned at only start location.
// e.g. P(Start Node) = 1
let startProbs =
Array.concat [ [| 1.0 |] ; [| for _ in 1 .. parsedArgs.nodes - 1 -> 0.0 |] ]
let endProbs =
List.fold (fun probs _ ->
calcStateProbs(parsedArgs.probability, probs.Length-1,
probs, Array.create probs.Length 0.0))
startProbs [1..parsedArgs.states]
endProbs
Here’s the code.
No promises this time but I may follow this sequential version up with a parallelized version.
I’ve been hearing more about Elixir lately so I thought I’d take it for a spin. “Elixir is a functional, meta-programming aware language built on top of the Erlang VM. It is a dynamic language that focuses on tooling to leverage Erlang’s abilities to build concurrent, distributed and fault-tolerant applications with hot code upgrades.”
I’ve never really spent any time with Erlang, but I’ve always been curious about it and the fact that it’s one of the best-kept ‘secrets’ in many startups these days. I’ve heard for years how easy it is to ‘scale out’ compared with many other languages and platforms.
Joe Armstrong, the creator of Erlang, wrote a post about Elixir in which he seemed to really like it except for some minor things. This got me even more curious so I decided to write some code that seemed like it could benefit from the features provided by Elixir for easily making suitable algorithms parallelizable.
Let’s talk about ring probabilities. Say we have a cluster of N nodes in a ring topology. We then might have some code that requires S steps to run, where each subsequent step is run on a node to the right of the previous node with some probability P.
In the initial state (S=0) the probability of some piece of code running on node A is P=1.
At the next step (S=1) the probability of the step running on the node to the right in the ring is P, and the probability of it running on the node to the left is 1-P.
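In general, then, each step updates every node’s probability from its two ring neighbours: P(s+1, i) = P * P(s, i-1) + (1 - P) * P(s, i+1), with the indices wrapped around modulo N.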
Here is an example with some crude ascii diagrams to visually represent this :
It looks like it might be coming sooner than I thought. I’m sure Shenturions everywhere will find this news incredibly exciting for the future of Shen. I can’t wait to see how things progress.
In computer science, a selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n²) time complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
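To make that concrete, here is a minimal Common Lisp sketch of the textbook in-place algorithm (illustrative only, not tuned):

;; Selection sort on a vector, in place.
;; O(n^2) comparisons, O(1) auxiliary memory.
(defun selection-sort! (vec &key (test #'<))
  (loop for i from 0 below (1- (length vec))
        do (let ((best i))
             ;; Find the index of the smallest remaining element ...
             (loop for j from (1+ i) below (length vec)
                   when (funcall test (aref vec j) (aref vec best))
                     do (setf best j))
             ;; ... and swap it into position I.
             (rotatef (aref vec i) (aref vec best))))
  vec)

;; (selection-sort! (vector 5 2 9 1 5 6)) => #(1 2 5 5 6 9)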
Kuksi is famous for his Baroque-influenced assemblages that seem to splinter out like sculptural fractals, each component fused to another object or detail even smaller and more intricate.
*Updated 08-23-2012 01:04:38* Replaced the use of Data.Vector with the persistent Data.Sequence, which has O(log n) worst-case time complexity on updates.
A Haskell version of the previous code using the more efficient (access and update) persistent Data.Sequence type, so that the desired time complexity is maintained for the union operation.
import Data.Sequence (Seq, index, update)

-- Disjoint set data type (weighted and using path compression).
-- O((M+N)lg*N + 2MlogN) worst-case union time (practically O(1))
-- For M union operations on a set of N elements.
-- O((M+N)lg*N) worst-case find time (practically O(1))
-- For M connected(find) operations on a set of N elements.
data DisjointSet = DisjointSet
{ count :: Int, ids :: (Seq Int), sizes :: (Seq Int) }
deriving (Read, Show)
-- Return id of root object
findRoot :: DisjointSet -> Int -> Int
findRoot set p | p == parent = p
| otherwise = findRoot set parent
where
parent = index (ids set) (p - 1)
-- Are objects P and Q connected ?
connected :: DisjointSet -> Int -> Int -> Bool
connected set p q = (findRoot set p) == (findRoot set q)
-- Replace sets containing P and Q with their union
quickUnion :: DisjointSet -> Int -> Int -> DisjointSet
quickUnion set p q | i == j = set
| otherwise = DisjointSet cnt rids rsizes
where
(i, j) = (findRoot set p, findRoot set q)
(i1, j1) = (index (sizes set) (i - 1), index (sizes set) (j - 1))
(cnt, psmaller, size) = (count set - 1, i1 < j1, i1 + j1)
-- Always make smaller root point to the larger one
(rids, rsizes) = if psmaller
then (update (i - 1) j (ids set), update (j - 1) size (sizes set))
else (update (j - 1) i (ids set), update (i - 1) size (sizes set))
CODEBREAKER tells the story of one of the most important people of the 20th century. Alan Turing set in motion the computer age and his World War II codebreaking helped save two million lives. Yet few people have heard his name, know his tragic story, or understand his legacy. In 1954, Turing committed suicide at age 41 after being forced to undergo hormone therapy to “fix” his sexual orientation. He left behind a lasting legacy and lingering questions about what else he might have accomplished if society had embraced his unique genius instead of rejecting it. 100 years after his birth, an international production team takes viewers on a journey to rediscover the man and the mystery.
Richard Carrier puts forward a fantastic approach to verifying history in his latest book : Proving History: Bayes’s Theorem and the Quest for the Historical Jesus “… historian Richard C. Carrier proposes Bayes’s theorem as a solution to the problem of establishing reliable historical criteria. He demonstrates that valid historical methods—not only in the study of Christian origins but in any historical study—can be described by, and reduced to, the logic of Bayes’s theorem. Conversely, he argues that any method that cannot be reduced to Bayes’s theorem is invalid and should be abandoned.”
And in case you’re wondering, yes it’s Bayes’s Theorem and not Bayes’ Theorem.
Binstock: So you called them on the lying. Kay: Yeah. But the thing that traumatized me occurred a couple years later, when I found an old copy of Life magazine that had the Margaret Bourke-White photos from Buchenwald. This was in the 1940s — no TV, living on a farm. That’s when I realized that adults were dangerous. Like, really dangerous.
For the New Intellectual: The Philosophy of Ayn Rand (50th Anniversary Edition) – Ayn Rand This is Ayn Rand’s challenge to the prevalent philosophical doctrines of our time and the “atmosphere of guilt, of panic, of despair, of boredom, and of all-pervasive evasion” that they create. One of the most controversial figures on the intellectual scene, Ayn Rand was the proponent of a moral philosophy–and ethic of rational self-interest–that stands in sharp opposition to the ethics of altruism and self-sacrifice. The fundamentals of this morality–“a philosophy for living on Earth”–are here vibrantly set forth by the spokesman for a new class, For the New Intellectual.
Ted Nelson‘s latest video series called “Computers for Cynics” calls it like it is. How refreshing in contrast with the pervasive garbage called ‘information’ that we get fed in the second decade of the 21st century.
I laughed and wept at part 1 : The Myth of Technology
Quite timely considering the recent round of UI design atrocities spread under the mask of ‘good design’.
As if a bunch of overpaid (Windows), over-stroked (Gnome), over-controlling (Mac) middle-managed programmers can do what Steve Jobs did.
What’s my UI ? Not for the faint of heart : mostly emacs in a terminal window and stumpwm (only really possible in Linux). Related : The more idiot-proof the system, the more you will constrain behavior, forcing people to act like idiots, even when it’s against their better judgment …
At the swissnex San Francisco conference earlier this year, scientists from Switzerland and the US discussed their research on humanoid robots, cognitive robotics, and artificial intelligence (AI). Talk revolved around how some robots self-reflect, self-improve, and adapt to new circumstances, and whether it’s possible for robots of the future to possess the same cognitive characteristics as humans.
Chronicle has to be the best sci-fi movie I’ve seen in 2012.
Shot in Cape Town, South Africa on a budget of just $15 million. The end result is incredible, including its box-office results.
Hope the sequel is just as good.
So I’ve switched this blog to WP for now. There are still some formatting issues here and there so please be patient while they are fixed.
Comments have been enabled for all posts retroactively.
Let’s see how this goes …
From the website: John was a legendary computer scientist at Stanford University who developed time-sharing, invented LISP, and founded the field of Artificial Intelligence. In March 2011 John launched Project JMC with the objective of making his work more approachable and accessible. The Project JMC team is continuing to help realize his objective. On this site you will find all of John’s work, including his social commentary, and acknowledgements of his outstanding contributions and impact. Additional comments, suggestions, stories, photographs and videos on John and his work are very welcome. Please send them to the Project JMC team. His old website is here.
A Shen type-checked implementation of Quick Sort is even more elegant/terse compared with the CL version posted previously.
Pattern-matching and currying make this possible.
(tc +)
(define filter
{(A --> boolean) --> (list A) --> (list A)}
_ [] -> []
T? [A | B] -> (append [A] (filter T? B)) where (T? A)
T? [_ | B] -> (filter T? B))
(define quick-sort-generic
{(list A) --> (A --> A --> boolean) --> (A --> A --> boolean) --> (list A)}
[] _ _ -> []
[A | B] L? R? -> (append (quick-sort-generic (filter (R? A) B) L? R?)
[A]
(quick-sort-generic (filter (L? A) B) L? R?)))
\* descending with duplicates *\
* (quick-sort-generic [3 1 2 7 9 6 6 3 0] >= <)
* [9 7 6 6 3 3 2 1 0] : (list number)
After watching some of Tim Roughgarden’s videos on sorting algorithms, I thought I’d post an implementation of quick sort in Common Lisp as an example of a non-imperative approach to algorithm implementation. It’s a simple enough example, at under 20 LOC. The complete code can be found here.
(defun quick-sort-generic2 (sequence cfun &optional result-type)
  ;; Base case : sequences of length 0 or 1 are already sorted.
  (if (<= (length sequence) 1)
      (copy-seq sequence)
      ;; PARTITION splits ARRAY into the elements that satisfy FUN
      ;; and those that don't.
      (flet ((partition (fun array)
               (list (remove-if-not fun array) (remove-if fun array))))
        (let* ((result-type (or result-type 'vector))
               ;; Choose a random pivot to avoid worst-case behavior
               ;; on already-sorted input.
               (pivot-ind (random (length sequence)))
               (pivot-val (elt sequence pivot-ind))
               ;; Remove the single pivot occurrence at PIVOT-IND.
               (rem-seq
                (remove pivot-val sequence :start pivot-ind :end (+ 1 pivot-ind)))
               (part (partition (lambda (x)
                                  (apply cfun (list x pivot-val)))
                                rem-seq)))
          ;; Recur on each partition and splice the pivot in between.
          (concatenate result-type
                       (quick-sort-generic2 (car part) cfun result-type)
                       (list pivot-val)
                       (quick-sort-generic2 (cadr part) cfun result-type))))))
* (test-sort)
started quick-sort (generic, array) ...
Evaluation took:
0.089 seconds of real time
0.081912 seconds of total run time (0.081587 user, 0.000325 system)
92.13% CPU
142,664,472 processor cycles
8,375,024 bytes consed
quick-sorted 10000 items (first 10 shown) :
#(9998 9998 9998 9997 9997 9996 9995 9994 9993 9992)
started quick-sort (generic, list) ...
Evaluation took:
0.062 seconds of real time
0.058722 seconds of total run time (0.058417 user, 0.000305 system)
95.16% CPU
99,419,648 processor cycles
9,371,456 bytes consed
quick-sorted 10000 items (first 10 shown) :
(9999 9998 9997 9997 9996 9996 9994 9993 9993 9992)
How do robots see the world? How do they gather meaning from our streets, cities, media and from us?
This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye.
Cory Doctorow’s latest talk, ‘The Coming War on General Purpose Computing’, really puts things in perspective about life in the 21st century.
This got me thinking more about functional programming languages and how they could become a future casualty, in a way similar to the intentional limitation and crippling of Turing machines that we currently see.
What if Paul Graham missed something very dark in his essay The Hundred-Year Language ?
The essay does not mention the potential role that government, industry and academia might play in the development, legality and distortion of such a language. After all, the more power a technology might put in the hands of an individual or group, the more attempts there will be to limit and control that technology.
The uber-powerful hundred-year programming language might be heavily regulated (if it’s as powerful as Paul Graham thinks it could be), the same way any powerful technology is.
That is, if we, as a species, even make it through the next 100 years. What if everything in society is eventually regulated (in large part due to mass hysteria instead of critical thinking) ? What if one day in the future we need a license to browse the internet ? One thing is inevitable: that kind of situation never lasts very long, as history will attest. Severe changes in society and government occur. In East Germany and Communist Russia people started killing themselves. Life was that bad. Hell, the factory workers in China who make our iPads, iPods and video game consoles are a contemporary case of this. Suicide is the only leverage they have; who else will work the assembly lines 90-100 hours a week ?
Or take the sad case of ancient Rome: it could easily be argued that the legal system played a fatal part in the self-inflicted collapse of a once-great empire that could no longer stand under its own legislative weight. In the 21st century western world we aren’t seeing mass suicide, but we have seen a massive increase in mental illness in this and the last century, which could be directly correlated with the massive, continual increase of legislation and the burden it puts on the average person. There’s no excuse for being ignorant of the googolzillion laws that exist today (unless of course you are a giant corporation or you work on Wall Street in derivatives trading, it seems). This is further compounded by the widespread increase in psychiatric prescriptions that a growing number of people rely on just to live a modern life.
Does the inhumane treatment of Alan Turing(and his subsequent suicide to escape it) ominously predict a similar fate for one of his greatest creations ?
Sold-out attendance marked this second premier conference for Clojure. What I realized is just how great the people in this community are. No, seriously. There’s far less ego than in many other communities. This is a great sign for the future of Clojure, particularly because Rich is such a great guy. He even gave a public apology for something or other that he said (on a mailing list) which he felt was not in the spirit of the Clojure community.
In this video Sam Aaron gives a fast-paced introduction to a number of key live programming techniques such as triggering instruments, scheduling future events and synth design. Finally, the viewer is shown how a simple musical sequence may be composed and then converted into an intricate Reich phase. The main body of the video was recorded in one take and features an Emacs buffer (using the Live Coding Config, is.gd/live_coding_emacs) for editing text and communicating with Overtone (is.gd/overtone), an expressive Clojure front-end to SuperCollider. Clojure is a state-of-the-art functional Lisp emphasising immutability and concurrency (clojure.org).
The first publicly available version of Shen/Kl has been released.
The Shen mission is to develop an ultra-portable version of Qi that can run under a wide variety of platforms and which incorporates missing features in Qi such as streams. The targeted platforms are CL, Clojure, NewLisp, Emacs Lisp, ECL, Scheme, Javascript and Python as well as C/C++, DVM, CLR or the JVM. This approach involved rewriting and redesigning Qi to fit within the smallest feasible instruction set.
In 1972 Ed Catmull (founder of Pixar) and his colleagues created the world’s first 3D rendered movie, an animated version of Ed’s left hand. This is the film that they produced. It includes some “making of” footage (around 1:30) and some other early experiments.
Update : fixed a bug where the hull was incorrectly calculated due to duplicate points being generated in the random set.
ClojureScript looks like a solid approach to building applications that target JavaScript VMs. It’s built on top of Google’s Closure Compiler/Library, which is very intriguing and is the best approach they could have taken (now that I’ve played with it a little). Being new to both Closure and ClojureScript, I was curious about what it might feel like to build an application using these tools. I’ve mostly despised programming in JavaScript for browsers no matter what hyped-up library is available (this includes jQuery, which is the best of a bad bunch in my opinion). So I decided to write a ClojureScript application that runs in the browser, based on a previous Clojure implementation of a Convex Hull algorithm with heuristics.
In this second post in the series we look at an implementation of the always useful Fast Fourier Transform (FFT), an algorithm for computing the Fourier transform of a set of discrete data values. Given a finite set of data points, for example a periodic sampling taken from a real-world signal, the FFT expresses the data in terms of its component frequencies. It also solves the essentially identical inverse problem of reconstructing a signal from the frequency data.
The FFT is a mainstay of numerical analysis. Gilbert Strang described it as “the most important algorithm of our generation”. The FFT also provides the asymptotically fastest known algorithm for multiplying two polynomials.
Our implementation comes in at just under 100 lines of code.
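For readers who just want the shape of the algorithm, here is a compact Common Lisp sketch of the classic recursive radix-2 Cooley-Tukey FFT (a simplified illustration, not the implementation from this series):

;; Recursive radix-2 Cooley-Tukey FFT. XS is a list of (complex)
;; samples whose length must be a power of two.
(defun fft (xs)
  (let ((n (length xs)))
    (if (= n 1)
        xs
        (let* ((evens (fft (loop for x in xs by #'cddr collect x)))
               (odds  (fft (loop for x in (cdr xs) by #'cddr collect x)))
               ;; Twiddle factors: e^(-2*pi*i*k/n) applied to the odd half.
               (twiddled (loop for k from 0
                               for o in odds
                               collect (* o (exp (/ (* -2 pi #c(0 1) k) n))))))
          ;; Butterfly step: combine the two half-size transforms.
          (append (mapcar #'+ evens twiddled)
                  (mapcar #'- evens twiddled))))))

;; (fft '(1 1 1 1 0 0 0 0)) returns the 8-point spectrum of a square pulse.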
This is the first in a series of posts that will demonstrate the implementation of many well-known (and less well-known) data structures and algorithms using a purely functional approach.
We will use Qi as our implementation language for a number of reasons :
It’s a Lisp : macros, EVAL, hash-tables, property-lists, meta-programming etc.
White paper : To a mathematical theory of evolution and biological creativity

“We present an information-theoretic analysis of Darwin’s theory of evolution, modeled as a hill-climbing algorithm on a fitness landscape. Our space of possible organisms consists of computer programs, which are subjected to random mutations. We study the random walk of increasing fitness made by a single mutating organism. In two different models we are able to show that evolution will occur and to characterize the rate of evolutionary progress, i.e., the rate of biological creativity.

For many years we have been disturbed by the fact that there is no fundamental mathematical theory inspired by Darwin’s theory of evolution. This is the fourth paper in a series attempting to create such a theory. In a previous paper we did not yet have a workable mathematical framework: we were able to prove two not very impressive theorems, and then the way forward was blocked. Now we have what appears to be a good mathematical framework, and have been able to prove a number of theorems. Things are starting to work, things are starting to get interesting, and there are many technical questions, many open problems, to work on.

So this is a working paper, a progress report, intended to promote interest in the field and get others to participate in the research. There is much to be done.”
Marvin Minsky’s The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind
Minsky argues that emotions are different ways to think that our mind uses to increase our intelligence. He challenges the distinction between emotions and other kinds of thinking. His main argument is that emotions are “ways to think” for different “problem types” that exist in the world. The brain has rule-based mechanisms (selectors) that turn on emotions to deal with various problems. The book reviews the accomplishments of AI, what is complicated to accomplish (and why) in terms of modeling how human beings behave, how they think, how they experience struggles and pleasures. (Wikipedia)

The Moral Landscape – Sam Harris
In this explosive new book, Sam Harris tears down the wall between scientific facts and human values, arguing that most people are simply mistaken about the relationship between morality and the rest of human knowledge. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible.
Bringing a fresh perspective to age-old questions of right and wrong, and good and evil, Harris demonstrates that we already know enough about the human brain and its relationship to events in the world to say that there are right and wrong answers to the most pressing questions of human life. Because such answers exist, moral relativism is simply false—and comes at increasing cost to humanity. And the intrusions of religion into the sphere of human values can be finally repelled: for just as there is no such thing as Christian physics or Muslim algebra, there can be no Christian or Muslim morality.
Using his expertise in philosophy and neuroscience, along with his experience on the front lines of our “culture wars,” Harris delivers a game-changing book about the future of science and about the real basis of human cooperation.

In the Plex: How Google Thinks, Works, and Shapes Our Lives – Steven Levy
Few companies in history have ever been as successful and as admired as Google, the company that has transformed the Internet and become an indispensable part of our lives. How has Google done it? Veteran technology reporter Steven Levy was granted unprecedented access to the company, and in this revelatory book he takes readers inside Google headquarters—the Googleplex—to show how Google works.
While they were still students at Stanford, Google cofounders Larry Page and Sergey Brin revolutionized Internet search. They followed this brilliant innovation with another, as two of Google’s earliest employees found a way to do what no one else had: make billions of dollars from Internet advertising. With this cash cow (until Google’s IPO nobody other than Google management had any idea how lucrative the company’s ad business was), Google was able to expand dramatically and take on other transformative projects: more efficient data centers, open-source cell phones, free Internet video (YouTube), cloud computing, digitizing books, and much more.
The key to Google’s success in all these businesses, Levy reveals, is its engineering mind-set and adoption of such Internet values as speed, openness, experimentation, and risk taking. After its unapologetically elitist approach to hiring, Google pampers its engineers—free food and dry cleaning, on-site doctors and masseuses—and gives them all the resources they need to succeed. Even today, with a workforce of more than 23,000, Larry Page signs off on every hire.
But has Google lost its innovative edge? It stumbled badly in China—Levy discloses what went wrong and how Brin disagreed with his peers on the China strategy—and now with its newest initiative, social networking, Google is chasing a successful competitor for the first time. Some employees are leaving the company for smaller, nimbler start-ups. Can the company that famously decided not to be evil still compete?
No other book has ever turned Google inside out as Levy does with In the Plex.
Qi is the future of Lisp.
It is Lisp with many great features such as pattern-matching, a Turing-complete static type system (even more powerful than Haskell’s type system) and many others.
So in the spirit of Pi Day, here’s an implementation that calculates π using Machin’s formula.
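For reference, Machin’s formula is π/4 = 4·arctan(1/5) - arctan(1/239). Since arccot x = arctan(1/x), the code below computes π = 4·(4·arccot 5 - arccot 239) in scaled-integer (fixed-point) arithmetic, carrying a few guard digits to absorb rounding.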
(define ceiling
X -> (CEILING X))
(declare ceiling [number --> number])
(define expt
X Y -> (EXPT X Y))
(declare expt [number --> number --> number])
(define floor
X Y -> (FLOOR X Y))
(declare floor [number --> number --> number])
(define log
X Y -> (LOG X Y))
(declare log [number --> number --> number])
(tc +)
(define arccot-
{number --> number --> number --> number --> number --> number}
X N XPOWER 0 _ -> 0
X N XPOWER TERM 1 -> (+ (arccot- X (+ N 2) (floor XPOWER X)
(floor XPOWER N) 0) (floor XPOWER N))
X N XPOWER TERM 0 -> (- (arccot- X (+ N 2) (floor XPOWER X)
(floor XPOWER N) 1) (floor XPOWER N)))
(define arccot
{number --> number --> number}
X UNITY -> (let XPOWER (floor (/ UNITY X) 1)
(arccot- (* X X) 1 XPOWER (floor XPOWER 1) 1)))
(define machin-pi
{number --> number}
DIGITS -> (let GUARD (+ 10 (ceiling (log DIGITS 10)))
UNITY (expt 10 (+ DIGITS GUARD))
(floor (* 4 (- (* 4 (arccot 5 UNITY))
(arccot 239 UNITY))) (expt 10 GUARD))))
Programming language wars don’t have to be religious wars. Programming languages should be rooted in philosophy. The more a programming language is rooted in sound philosophy, the more value it has.
Over the years, many of the posts on this blog have been regarding some programming language, algorithm or technology. Some posts have highlighted why Lisp is the most powerful and useful programming language paradigm available to man at this point in the history of computer science.
Explicitly pointing out examples of Lisp code is always insightful and important (at least to those open to evidence and reason).
Still there are people who cannot (or will not?) grasp just why Lisp is, has been (for the past half-century) and will remain so important to the development and growth of computer science. For example, some people, in spite of having read Paul Graham’s clear essays on Lisp (which make it really easy to grasp why Lisp is important), still often seem to parrot incoherent, illogical arguments and myths against Lisp.
My goal with many of the blog posts here has been to bring some understanding to folks interested in Lisp and computer-science-related topics that are grounded in integrity and therefore of real value to those who pursue them.
Within computer science, academia and industry there are too many disparate choices presented to the various stakeholders, from the cubicle dwellers all the way up to the CEOs and professors. The elephant in the room with all these choices (and what most of them have in common) is that they are lacking in integrity and value. Profit, control, ignorance, altruism, stupidity, inexperience, grant money, incapability, kickbacks, bonuses, salaries, titles, fear and many other reasons explain why integrity and value are lacking.
The same has happened with all of the other sciences too, and from a philosophical standpoint the causes are very much similar.
At this point some may ask why philosophy is even important to computer science, let alone to a programming language called Lisp. The kind of person who asks this question is usually the kind who has never understood why philosophy itself is so important. Well, just how important is philosophy ? The short answer is that, after life itself, philosophy is the second most important thing to a human being.
Having stated the high importance of philosophy, it’s critical that I quickly define what I mean by philosophy. By philosophy I mean the study of the fundamental nature of knowledge, reality and existence using the tools of observation, evidence, empiricism, logic and reason. This is the classical philosophy of Aristotle and Socrates, which is rational absolutism. It is NOT the charlatan ‘philosophy’ of mysticism, positivism, relativism, perspectivism, nihilism and altruism of Plato, Marx, Immanuel Kant, Kierkegaard, Hegel and so many others, whose theories have tragically played out in human history and some of which unfortunately are still adhered to right up until now. They are more correctly called what they are : ideologies or religions. Religion is irrational absolutism. The philosophy I am talking about is made distinct in that it is rational absolutism. It is therefore not for bar-room outbursts or musings between tenured professors in dusty old buildings. It is not the salesy popular positive-thinking conventions, or the caffeine-overdose incoherent babbling at church or AA gatherings. It is not the foggy, upbeat, tangled ramblings of relativism at Burning Man. It is a study that has practical applications right from the start.
Without philosophy you would not be reading this blog post. You would not have a computer; there would be no internet. Society would not have produced books, hygiene would not exist, and the Enlightenment would never have happened. Mathematics and the sciences would never have advanced to where they are today, and we would not be benefiting from them, if it weren’t for philosophy. From the dysfunctional quirks to the atrocities perpetrated through conflict around this planet, which we all witness each day in society, the root cause is the problem of ideologies and religions usurping the rightful place of philosophy. Philosophy is a matter of life and death. Philosophy is as critical to ethics and morality as it is to mathematics and science. This has been conclusively proved from first principles, so I will not do it here. The human race has advanced in technology further than anyone could have imagined, and yet we still resort to coercion and violence at all levels of society. This is because we have not based our ethics and morality on philosophy. Instead we have given these responsibilities of moral and ethical definition to authority figures : the government, industry, academia, the church, and well-intentioned but dishonest and flawed parental coercion that does incalculable damage to children and then plays out in our societies through their adult lives in crime and violence, or, if we’re lucky, merely in benign stupidity and arrested personal development.
Well, that’s quite the detour but it’s important to highlight the importance of Philosophy.
Here I put forward that Lisp’s outstanding importance to computer science, compared with other programming languages, is based on its solid philosophical foundation. This is quite simple to prove, and I will do so in a few paragraphs.
Lisp is based on the lambda calculus. The lambda calculus is a formal system for function definition, function application and recursion. Lisp’s contribution to programming language theory is unfortunately, for the most part, unrecognized by the majority of programmers today. For example, Lisp and the typed lambda calculi serve as the foundation for modern type systems. At the other end, no equivalent of Lisp’s concept of macros exists in any other programming language even today. If there were, that programming language would be a Lisp implementation.
Let’s look further at Lisp. I stated that Lisp is based on the formal system of the lambda calculus. The lambda calculus is built on functions, logic and predicates, recursion, and a number of other important concepts that are fundamental in mathematics. It follows that the greater the fidelity a programming language has to these mathematical concepts, and the more it builds upon them, the more powerful the programming language will be. History provides the evidence: no other programming language has done this better than Lisp.
We could go even deeper and ask why these fundamental mathematical concepts are so crucial. The answer takes us into the philosophy upon which mathematics is based. Sound philosophy demanded that these mathematical concepts be tested by evidence, logic and rigor, from very basic premises built up into more complex and powerful structures of thought that were proved to be true. Metaphorically: mathematics and the sciences are trees which can only grow in the soil of philosophy. The reasons are plain as to why religion, superstition and mysticism are not responsible for putting man on the moon or leading to the discovery of DNA.
The scientific/mathematical side of Lisp is just half of the explanation though. The other half of Lisp is the ethical and moral side. Stay with me. Most programmers hardly ever associate a programming language with ethics and morality, but these do play a role in the design and use of a language, because human beings must use these programming languages. They must fill their minds with the concepts and limitations that these programming languages require for their use and application. When a language designer decrees (when he should instead be deducing) that some of the features available to him in the design are too powerful for the users of his language, then he is in the realm of morality and ethics, and as such is subject to valid moral and ethical scrutiny, which is in turn based on rational, evidence-based philosophy. You may be a computer scientist, but you are still to be held morally and ethically responsible for your creation and what it subjects the minds of your users to. On a daily basis, and for the masses of programmers, their language is unfortunately a religious preference: an ideology forced upon them by their indoctrination from their peer group in academia or industry. Many are not even aware that their everyday programming language even matters. Most just don’t care. They are unaware of how the languages they use affect their ability to think and solve problems.
Lisp’s design considered the user of the language to be equally as important as the designer of the language. This shows in that Lisp has compile-time and run-time macros, which effectively allow users to change the language itself at its most basic level if they so desire. Contrast this design with the dictatorial designs of popular languages in industry. Common Lisp’s design takes the freedom of the user even more seriously, being a multi-paradigm Lisp.
In conclusion, I don’t want to suggest that everyone should be using Lisp against their will. That would run counter to the philosophy of Lisp. Lisp is not a religion in the way other programming languages are treated. The myth of the newbie-eating Lisp hacker is just that, a myth. Lisp is embraced by the minority, just as the sciences are. It has been shown that Lisp is based on sound philosophical principles and that these have resulted in it being the most successful (not the most popular) programming language in history. Its contribution to programming language theory is remarkable. It has also imparted enjoyment, programming power and cognitive freedom to its users like no other programming language has.
Keep Lisping !
Artificial Intelligence was born in Computer Science departments, and inherited their value sets including Correctness. This mindset, this necessity to be logical, provable, and correct has been a fatal roadblock for Artificial Intelligence since its inception. The world is Bizarre, and Logic can not describe it. Artificial Intuition will easily outperform Logic based Artificial Intelligence for almost any problem in a Bizarre problem domain. From the very beginning, Artificial Intelligence should have been a soft science.
As an example, let’s look at an algorithm that’s fairly common : breadth-first traversal of a binary tree, also called level-order traversal.
Wikipedia: “In graph theory, breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal.”
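To make it concrete, here is a minimal Common Lisp sketch, with tree nodes represented as (VALUE LEFT RIGHT) lists and NIL for an empty subtree (a simple, not amortized-O(1), FIFO queue is used for clarity):

;; Level-order (breadth-first) traversal of a binary tree.
(defun level-order (tree)
  (let ((queue (and tree (list tree)))
        (visited '()))
    (loop while queue
          do (destructuring-bind (value left right) (pop queue)
               (push value visited)
               ;; Enqueue the children so they are visited level by level.
               (when left  (setf queue (append queue (list left))))
               (when right (setf queue (append queue (list right))))))
    (nreverse visited)))

;; (level-order '(1 (2 (4 nil nil) nil) (3 nil nil))) => (1 2 3 4)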
The implementers of existing Ruby VMs have gone the way of C, C++ and Java.
There is another possibility. Why not implement Ruby in Common Lisp ?
Ok, let’s take a first shot and implement a subset of the Ruby language. Just enough to run some simple contrived numeric benchmarks. The following pseudo-BNF describing a minimal subset of Ruby should do (taken from the rubydocs here).
PROGRAM : COMPSTMT
COMPSTMT : STMT (TERM EXPR)* [TERM]
STMT : CALL
| EXPR
EXPR : ARG
CALL : FUNCTION
FUNCTION : OPERATION [`(' [CALL_ARGS] `)']
ARG : LHS `=' ARG
| ARG `..' ARG
| ARG `+' ARG
| ARG `-' ARG
| ARG `*' ARG
| PRIMARY
PRIMARY : LITERAL
| VARIABLE
| for BLOCK_VAR in EXPR DO
COMPSTMT
end
| def FNAME ARGDECL
COMPSTMT
end
LHS : VARIABLE
CALL_ARGS : ARGS
ARGS : ARG (`,' ARG)*
ARGDECL : `(' ARGLIST `)'
ARGLIST : IDENTIFIER(`,'IDENTIFIER)*
VARIABLE : VARNAME
| nil
LITERAL : numeric
| STRING
TERM : `;'
| `\n'
FNAME : IDENTIFIER
OPERATION : IDENTIFIER
VARNAME : IDENTIFIER
STRING : `"' any_char* `"'
IDENTIFIER is the sequence of characters in the pattern of /[a-zA-Z_][a-zA-Z0-9_]*/.
Let’s write two functions in our new Ruby subset language. First, a function to calculate Fibonacci numbers.
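To sketch the idea (this is an illustrative example, not actual output from the implementation): a fib definition that fits the grammar above, and one plausible Common Lisp expansion such a compiler might emit.

;; Ruby-subset source (hypothetical example, per the grammar above):
;;
;;   def fib(n)
;;     a = 0; b = 1
;;     for i in 1..n
;;       t = a + b; a = b; b = t
;;     end
;;     a
;;   end
;;
;; One plausible Common Lisp expansion the compiler might emit:
(defun rb-fib (n)
  (let ((a 0) (b 1) (t* 0))   ; T is reserved in CL, so T* stands in for t
    (loop for i from 1 to n
          do (setf t* (+ a b)
                   a b
                   b t*))
    a))

;; (rb-fib 10) => 55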
These days there just can’t be enough said to counter the hype that comes with STM. The following paper is an eye-opening read (it measures the actual performance of STM). Hopefully some of the STM faithful will reconsider their fanatical beliefs.
Software transactional memory: why is it only a research toy? “In this article, we explore the performance of a highly optimized STM and observe the overall performance of TM is much worse at low levels of parallelism, which is likely to limit the adoption of this programming paradigm.”
Update 2010-06-30 : So just over a day after I posted this entry Google announced that they have acquired ITA. Announcement
There was buzz back in April about Google possibly acquiring ITA Software.
A few days ago Dan posted that these were just rumors based on a single Bloomberg article.
Well, it seems that it may be more than just a rumor : The Wall Street Journal now has an article regarding the potential acquisition.
The silence seems quite confirming : “Google declined to comment. ITA didn’t respond to calls seeking comment.”.
Also fairly interesting is that ITA uses a very similar combination of languages to what Google does : Python, C++ and Java. Would this factor into Google’s interest ?
Hopefully, if this does happen, the ITA Lisp hackers get a substantial financial reward for all their innovative work. Fingers crossed …
“Let over Lambda – 50 Years of Lisp” by Doug Hoyte
This one had been sitting on my bookshelf for almost a year. “Let Over Lambda is one of the most hardcore computer programming books out there. Starting with the fundamentals, it describes the most advanced features of the most advanced language: COMMON LISP. The point of this book is to expose you to ideas that you might otherwise never be exposed to.”
and “Macros are what make lisp the greatest programming language in the world.”
and “If you are looking for a dry coding manual that re-hashes common-sense techniques in whatever langue du jour, this book is not for you. This book is about pushing the boundaries of what we know about programming.” “Only the top percentile of programmers use Lisp and if you can understand this book you are in the top percentile of Lisp programmers.”
I’m sure the author has received his fair share of derision for making statements like these and for his clear, well-researched content showing how Lisp is the greatest programming language ever invented. It may come off as conceited to some, but he is right. He’s also in good company with the greatest computer scientists ever known, who have made similarly bold statements regarding Lisp. This style is lacking in most technical writing today; there has been too much coddling and dilution in programming texts over the last decade. The fact is that unless you have mastered this language you are in no position to even begin discussing this topic. This book, much like any of the posts on this site, is not for convincing the majority of Blub programmers that they should be using Lisp, but for encouraging the few who have an appreciation for the world’s greatest programming language to look even deeper.
Here’s a quote from the first chapter for those that might have known for a long while that there is more to programming than the hype of formal object systems that we’ve all been subjected to for the past few decades : “Object systems are a formalisation of a subset of let and lambda combinations, sometimes with gimmicks like inheritance bolted on. Because of this, Lisp programmers often don’t think in terms of classes and objects. Let and lambda are fundamental; objects and classes are derivatives. As Steele says, the ‘object’ need not be a primitive notion in programming languages. Once assignable value cells and good old lambda expressions are available, object systems are, at best, occasionally useful abstractions and, at worst, special-case and redundant.”
Buy the book. Programming language choice matters.

“Introduction to Metamathematics” by Stephen Cole Kleene

Not much more needs to be said about this one.

“The Stuff of Thought: Language as a Window into Human Nature” by Steven Pinker
From the Washington Post : “Language comes so naturally to us that it’s easy to believe there’s some sort of intrinsic logic connecting the thing and its name, the signifier and the signified. In one of Plato’s dialogues, a character named Cratylus argues that “a power more than human gave things their first names.”
But Cratylus was wrong. Human language is an emanation of the human mind. A thing doesn’t care what we call it. Words and their rules don’t tell us about the world; they tell us about ourselves.
That’s the simple premise behind Steven Pinker’s latest work of popular science. According to the Harvard psychologist, people are “verbivores, a species that lives on words.” If you want to understand how the brain works, how it thinks about space and causation and time, how it processes emotions and engages in social interactions, then you need to plunge “down the rabbit hole” of language. The quirks of our sentences are merely a portal to the mind.
In The Stuff of Thought, Pinker pitches himself as the broker of a scientific compromise between “linguistic determinism” and “extreme nativism.” The linguistic determinists argue that language is a prison for thought. The words we know define our knowledge of the world. Because Eskimos have more nouns for snow, they are able to perceive distinctions in snow that English speakers cannot. While Pinker deftly discredits extreme versions of this hypothesis, he admits that “boring versions” of linguistic determinism are probably accurate. It shouldn’t be too surprising that our choice of words can frame events, or that our vocabulary reflects the kinds of things we encounter in our daily life. (Why do Eskimos have so many words for snow? Because they are always surrounded by snow.) The language we learn as children might not determine our thoughts, but it certainly influences them.
Extreme nativism, on the other hand, argues that all of our mental concepts — the 50,000 or so words in the typical vocabulary — are innate. We are born knowing about carburetors and doorknobs and iPods. This bizarre theory, most closely identified with the philosopher Jerry Fodor, begins with the assumption that the meaning of words cannot be dissected into more basic parts. A doorknob is a doorknob is a doorknob. It only takes Pinker a few pages to prove the obvious, which is that each word is not an indivisible unit. The mind isn’t a blank slate, but it isn’t an overstuffed filing cabinet either.
So what is Pinker’s solution? He advocates the middle ground of “conceptual semantics,” in which the meaning of our words depends on an underlying framework of basic cognitive concepts. (As Pinker admits, he owes a big debt to Kant.) The tenses of verbs, for example, are shaped by our innate sense of time. Nouns are constrained by our intuitive notions about matter, so that we naturally parcel things into two different categories, objects and substances (pebbles versus applesauce, for example, or, as Pinker puts it, “hunks and goo”). Each material category comes with a slightly different set of grammatical rules. By looking at language from the perspective of our thoughts, Pinker demonstrates that many seemingly arbitrary aspects of speech, like that hunk and goo distinction, aren’t arbitrary at all: They are byproducts of our evolved mental machinery.
Pinker tries hard to make this tour of linguistic theory as readable as possible. He uses the f-word to explore the topic of transitive and intransitive verbs. He clarifies indirect speech by examining a scene from “Tootsie,” and Lenny Bruce makes so many appearances that he should be granted a posthumous linguistic degree. But profanity from Lenny Bruce can’t always compensate for the cryptic vocabulary and long list of competing ‘isms. Sometimes, the payoff can be disappointing. After a long chapter on curse words — this book deserves an “explicit content” warning — Pinker ends with the banal conclusion that swearing is “connected with negative emotion.” I don’t need conceptual semantics to tell me that.
The Stuff of Thought concludes with an optimistic gloss on the power of language to lead us out of the Platonic cave, so that we can “transcend our cognitive and emotional limitations.” It’s a nice try at a happy ending, but I don’t buy it. The Stuff of Thought, after all, is really about the limits of language, the way our prose and poetry are bound by innate constraints we can’t even comprehend. Flaubert was right: “Language is a cracked kettle on which we beat out tunes for bears to dance to, while all the time we long to move the stars to pity.”
One popular interview question that never dies : write some code to reverse a singly linked-list.
Now, understanding the problem is one thing for interviewees, but getting it right in an imperative language seems to be quite a feat, based on my experience as an interviewer. Most take 15-20 minutes to get it completely correct, and sometimes even longer.
Here’s a quick solution that took me 10 minutes (in C) :
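A minimal sketch of the standard iterative approach (the node type and names here are illustrative) : walk the list once, re-pointing each node at the one before it.
#include <stdio.h>

/* minimal singly linked-list node, for illustration */
typedef struct node {
    int value;
    struct node *next;
} node;

/* reverse the list in place; returns the new head */
node *reverse(node *head) {
    node *prev = NULL;
    while (head != NULL) {
        node *next = head->next; /* remember the rest of the list */
        head->next = prev;       /* flip this node's pointer */
        prev = head;             /* advance prev */
        head = next;             /* advance head */
    }
    return prev;
}

int main(void) {
    node c = {3, NULL}, b = {2, &c}, a = {1, &b};
    for (node *p = reverse(&a); p != NULL; p = p->next)
        printf("%d ", p->value); /* prints : 3 2 1 */
    printf("\n");
    return 0;
}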
“If we make an unbiased examination of the accomplishments made by mathematicians to the real world of computer programming, we are forced to conclude that, so far, the theory has actually done more harm than good. There are numerous instances in which theoretical “advances” have actually stifled the development of computer programming, and in some cases they have even made it take several steps backward!”
Here’s a molecule viewer using OpenGL in Clojure. I implemented it so that I could compare this type of development with Common Lisp (and also with OpenGL development in C). OpenGL with Clojure can be really fun using Zach Tellman’s Penumbra. Coming at OpenGL from a more functional angle (e.g. not mutating global vars etc.) can be a little different, but it isn’t difficult for anyone who has done enough functional programming. The code is here.
DISCLAIMER :
THIS IS A LANGUAGE RANT. DO NOT READ IF YOUR BPF* IS LESS THAN 0.37.
SIDE EFFECTS MAY INCLUDE SEVERE CONFUSION, UNJUSTIFIED ANGER, GASSY DISCHARGE,
INCOMPREHENSIBLE DIALOG, ERRATIC DEGRADATION OF CHARACTER AND POSSIBLY A
COMPLETE EMOTIONAL MELTDOWN.
Performance is one important aspect that people might want in a language. One language that continually surprises me in this regard is Haskell. Seriously, it’s simply amazing. But performance is only one important consideration when using a language. Abstraction is another.
What about more powerful levels of abstraction such as meta-programming ?
Haskell has very powerful functional idioms, but it (and many other languages) is left at the door beyond which even more unfathomable worlds of expression are possible in the way of meta-programming. Unfortunately, as cool as Template Haskell is, it does not gain Haskell entry into the warp speed meta-programming universe. In Lisp there is no distinction between the syntax and the AST. In Template Haskell the AST is modeled with explicit algebraic data types that mirror Haskell’s surface syntax : an Exp type represents Haskell expressions, Pat patterns, Lit literals, Dec declarations, Type data types and so on. You can see the definitions of all these types in the module Language.Haskell.TH.Syntax.
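For contrast, here is a tiny Common Lisp sketch (the macro name unless-zero is invented for illustration). The ‘AST’ a Lisp macro manipulates is just an ordinary list built with quasiquote; there are no separate Exp, Pat or Dec types anywhere.
;; a macro receives its arguments as the same s-expressions you
;; typed, and returns a new list that becomes the code
(defmacro unless-zero (n &body body)
  `(if (zerop ,n) nil (progn ,@body)))

;; the expansion is inspectable with ordinary list functions :
(macroexpand-1 '(unless-zero x (print "non-zero")))
;; => (IF (ZEROP X) NIL (PROGN (PRINT "non-zero")))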
Haskell is still a great language !
Beyond the door is another universe where Lisp macros make warp speed exploration possible. Lisp users travel this world freely and playfully looking for the next interesting practical joke to play on other language users (of course that means writing more Lisp code !). Beware : travelling the universe at warp speed is quite dangerous, but probably not any more dangerous than driving your car. Can any other language pass through this door without embracing symbolic expressions (which make Lisp macros possible) ? Why not leave right now and find out ?
Seriously, I’ll be patiently waiting right here until you return. I’m not going anywhere.
Ah ! You’re back, great. Find anything ? Please email me ! Lisp does not need to prove anything.
The endless neophytic whining about Lisp not having a killer application is annoying. It’s annoying because we all know this is a very old joke. You do know it’s a very old joke, don’t you ? There are many killer applications; they have existed for the past 50 years. One reason they remain unknown might be that they aren’t social networking sites or any of the other dehumanizing, reductive web 2.0 ilk that is so commonly marketed as ‘innovative’. In other words : certain persons might simply have tunnel vision or be plainly ignorant regarding Lisp’s so-called killer applications. Another reason is that the military, aerospace and other companies use Lisp, but you and I will never get to see these applications (let alone the source code !). With Clojure becoming more popular this may all change in the next few years : the web 2.0 places may start using the language. But really, any application written in Lisp is a killer application !
Something else to note carefully is how often Lisp’s most vehement opponents know only the smallest amount of Lisp syntax, and sadly nothing more. Or they have just enough superficial knowledge to argue with those who have been using it for decades. The ditto heads scream about there being too many parentheses. Funny, because Java code often has more parentheses than Lisp; they’re just in different places !
It’s my own fault; I sometimes try engaging certain superficial functional and Lisp crowds on the internet. Comments are quickly down-voted because of the prevalent mob mentality and inability to think independently. Have we all not yet evolved beyond peer pressure ? Really ? Sometimes there are some gems, when an independent thinker has something to add to the conversation instead of the mob screech and splintered egos that ruin the dialog. I basically end up writing my own posts on my own site to summarize and raise the signal-to-noise ratio.
Not many comp-sci people would dispute the value of studying boolean algebra, graph theory, algorithms or any of the other fundamental topics related to their field. Why the aversion to Lisp then ? Maybe it’s because some people (including computer scientists) have an aversion to Math ? I believe it’s because they have not taken the time to learn it. It’s not that Lisp is difficult to learn, really. In fact newbie programmers find the syntax quite easy. What’s difficult is un-learning the imperative approach that they may have been used to for years. Computers are often treated as binary soup buckets ! Yes, binary soup buckets ! Stir, taste and then repeat the steps until the desired application flavor is acquired for whatever software broth it is that you might be cooking up. Scratch at the same location in memory until the desired outcome is obtained. This is the stuff of real nerd stand-up comedy, folks. I could get into the Lisp machine rant but I’ll leave that for another time.
Even if a single useful Lisp application had still not been written, it would make no difference at all. Lisp stands on its own merits in just the same way that pure mathematics does.
In fact, Lisp basically IS Math : Lambda Calculus, Set Theory etc., really an implementation of those ideas for a computer. This is one reason certain mathematicians love Lisp. It’s interesting to note that the first computers existed only in Math papers. For reasons why programming languages should be ‘mathy’ and why they matter to theorem proving, read Chaitin. Seriously, go read anything by Chaitin right now.
If Lisp was a character from Star Trek it would be Q : despised and misunderstood by the many and loved by the few, which are usually savvy starship captains and smoking-hot babes of various intergalactic species. Women always seem to ‘get-it’ long before men do, why is that ? Something to do with their often higher BPF ?
Thanks for the applause, I’ll be here all week !
*BPF – Brain Plasticity Factor. Where IQ measures ‘intelligence’, BPF measures a person’s ability to consider more than a single perspective, i.e. a flexible vector-based intelligence as opposed to a more rigid scalar-based intelligence. Low BPFs usually indicate a tendency toward mechanical thinking and often a predisposition to fundamental religious outbursts (sometimes fatal); in other words : they just aren’t fun to be around.
High BPFs on the other hand display far greater degrees of intelligence, actual wisdom (not only knowledge), an ability to think independently, increased happiness (not to be so quickly devalued – ask the Ruby guys), an incredibly high tolerance for those with low BPFs, a certain way with the opposite (or same) sex (remember Einstein with the ladies !) and an all-round appreciation for the uniqueness of the human being. Warning : these people can be really fun at parties !
* Update 2009-12-27 * Using lists instead of vectors brings the Clojure version down from around 20x to around 10x slower than the CL version. One reader correctly stated that when doing performance comparisons between languages and implementations you want the code and data structures to be as identical as possible.
“A palindrome is a word, phrase, number or other sequence of units that can be read the same way in either direction (the adjustment of punctuation and spaces between words is generally permitted).”
The Idea Factory – Learning to Think at MIT, Pepper White :
“This is a personal story of the educational process at one of the world’s great technological universities. Pepper White entered MIT in 1981 and received his master’s degree in mechanical engineering in 1984. His account of his experiences, written in diary form, offers insight into graduate school life in general—including the loneliness and even desperation that can result from the intense pressure to succeed—and the purposes of engineering education in particular. The first professor White met at MIT told him that it did not really matter what he learned there, but that MIT would teach him how to think. This, then, is the story of how one student learned how to think.”
Bio-Inspired Artificial Intelligence – Theories, Methods, and Technologies, Dario Floreano and Claudio Mattiussi :
“New approaches to artificial intelligence spring from the idea that intelligence emerges as much from cells, bodies, and societies as it does from evolution, development, and learning. Traditionally, artificial intelligence has been concerned with reproducing the abilities of human brains; newer approaches take inspiration from a wider range of biological structures that are capable of autonomous self-organization. Examples of these new approaches include evolutionary computation and evolutionary electronics, artificial neural networks, immune systems, biorobotics, and swarm intelligence—to mention only a few. This book offers a comprehensive introduction to the emerging field of biologically inspired artificial intelligence that can be used as an upper-level text or as a reference for researchers.”
On the Edge – The Spectacular Rise and Fall of Commodore, Brian Bagnall :
“Filled with first-hand accounts of ambition, greed, and inspired engineering, this history of the personal computer revolution takes readers inside the cutthroat world of Commodore. Before Apple, IBM, or Dell, Commodore was the first computer maker to market its machines to the public, eventually selling an estimated 22 million Commodore 64s. These halcyon days were tumultuous, however, owing to the expectations and unsparing tactics of founder Jack Tramiel. Engineers and managers share their experiences between 1976 and 1994 of the groundbreaking moments, soaring highs, and stunning employee turnover that came with being on top of the PC world in the early computer business.”
The Road to Reality : A Complete Guide to the Laws of the Universe – Roger Penrose :
“At first, this hefty new tome from Oxford physicist Penrose (The Emperor’s New Mind) looks suspiciously like a textbook, complete with hundreds of diagrams and pages full of mathematical notation. On a closer reading, however, one discovers that the book is something entirely different and far more remarkable. Unlike a textbook, the purpose of which is purely to impart information, this volume is written to explore the beautiful and elegant connection between mathematics and the physical world. Penrose spends the first third of his book walking us through a seminar in high-level mathematics, but only so he can present modern physics on its own terms, without resorting to analogies or simplifications (as he explains in his preface, “in modern physics, one cannot avoid facing up to the subtleties of much sophisticated mathematics”). Those who work their way through these initial chapters will find themselves rewarded with a deep and sophisticated tour of the past and present of modern physics. Penrose transcends the constraints of the popular science genre with a unique combination of respect for the complexity of the material and respect for the abilities of his readers. This book sometimes begs comparison with Stephen Hawking’s A Brief History of Time, and while Penrose’s vibrantly challenging volume deserves similar success, it will also likely lie unfinished on as many bookshelves as Hawking’s. For those hardy readers willing to invest their time and mental energies, however, there are few books more deserving of the effort.”
Born to Run – A Hidden Tribe, Super Athletes, and the Greatest Race the World Has Never Seen, Christopher McDougall :
“Full of incredible characters, amazing athletic achievements, cutting-edge science, and, most of all, pure inspiration, Born to Run is an epic adventure that began with one simple question: Why does my foot hurt? In search of an answer, Christopher McDougall sets off to find a tribe of the world’s greatest distance runners and learn their secrets, and in the process shows us that everything we thought we knew about running is wrong.”
Many learners will move into a variety of different, possibly unrelated fields over the course of their lifetime.
Informal learning is a significant aspect of our learning experience. Formal education no longer comprises the majority of our learning. Learning now occurs in a variety of ways – through communities of practice, personal networks, and through completion of work-related tasks.
Learning is a continual process, lasting for a lifetime. Learning and work related activities are no longer separate. In many situations, they are the same.
Technology is altering (rewiring) our brains. The tools we use define and shape our thinking.
The organization and the individual are both learning organisms. Increased attention to knowledge management highlights the need for a theory that attempts to explain the link between individual and organizational learning.
Many of the processes previously handled by learning theories (especially in cognitive information processing) can now be off-loaded to, or supported by, technology.
Know-how and know-what is being supplemented with know-where (the understanding of where to find knowledge needed).
...
(i) The point of solving problems is to understand mathematics better.
(ii) The point of understanding mathematics is to become better able to solve problems.
...
A quine is a program that produces its complete source code as its only output.
Because Clojure is a Lisp, it is especially well suited to writing quines. Here is a quine implemented in Clojure.
((fn [x] (list x (list (quote quote) x))) (quote (fn [x] (list x (list (quote quote) x)))))
A few updates and bug fixes from the first version.
You can now play it here.
ACHTUNG ! : This requires unrestricted perms so you should probably just grab the code and compile it yourself.
* UPDATED : 2009-09-17 * – rewritten in a functional style that is more idiomatic of Clojure.
A nostalgic attempt to try and learn some GUI programming in Clojure. Inspired by Pong in Haskell.
Play online here.
The computer seems unbeatable (but thanks to a bug it’s not, once things get faster).
Have fun !
< 200 lines of code :
;;;;
;;;; Pong!
;;;;
;;;; Justin Grant
;;;; 2009/09/12
(ns i27.games.pong
(:import (java.awt Color Toolkit Font GraphicsEnvironment Graphics2D)
(java.awt.image BufferStrategy)
(java.awt.event ActionListener MouseMotionListener KeyListener
MouseEvent KeyEvent)
(javax.swing JFrame Timer)))
(defstruct ball :h :w :x :y :sx :sy)
(defn new-ball [& [h w x y sx sy]] (atom (struct ball h w x y sx sy)))
(defn set-ball-size [b h w] (swap! b assoc :h h :w w))
(defn set-ball-speed [b sx sy] (swap! b assoc :sx sx :sy sy))
(defn set-ball-position [b x y] (swap! b assoc :x x :y y))
(defstruct paddle :h :w :x :y)
(defn new-paddle [& [h w x y]] (atom (struct paddle h w x y)))
(defn set-paddle-size [p h w] (swap! p assoc :h h :w w))
(defn set-paddle-position [p x y] (swap! p assoc :x x :y y))
(defstruct game :h :w :timer :score :started :my)
(defn new-game [& [h w timer score started my]]
(atom (struct game h w timer score started my)))
(defn set-game-size [g h w] (swap! g assoc :h h :w w))
(defn set-game-timer [g t] (swap! g assoc :timer t))
(defn set-game-score [g s] (swap! g assoc :score s))
(defn set-game-mouse-y [g my] (swap! g assoc :my my))
(defn stop-game [g]
(swap! g assoc :started false) (let [#^Timer t (@g :timer)] (.stop t)))
(defn start-game [g]
(swap! g assoc :started true) (let [#^Timer t (@g :timer)] (.start t)))
(defn reset-game [g b p c]
(set-ball-size b (* (@g :h) 0.0335) (* (@g :h) 0.0335))
(set-ball-position b
(- (/ (@g :w) 2) (/ (@b :w) 2))
(- (/ (@g :h) 2) (/ (@b :h) 2)))
(set-ball-speed b 15 15)
(set-paddle-size p (* (@b :h) 5) (@b :w))
(set-paddle-position p 35 (- (/ (@g :h) 2) (/ (@p :h) 2)))
(set-paddle-size c (@p :h) (@p :w))
(set-paddle-position c (- (@g :w) (@p :x) (@p :w)) (@p :y))
(set-game-score g 0))
(defn pong-frame [g b p c f1 f2]
(proxy [JFrame ActionListener MouseMotionListener KeyListener] []
(paint [grf]
(let [#^JFrame me this
#^BufferStrategy bs (.getBufferStrategy me)
#^Graphics2D gr (if (not= nil bs) (. bs getDrawGraphics) nil)]
(if (not= nil gr)
(do
(.setColor gr Color/BLACK)
(.fillRect gr 0 0 (@g :w) (@g :h))
(.setColor gr Color/WHITE)
(.setFont gr f1)
(.drawString gr (str "SCORE " (@g :score)) 20 20)
(.fillRect gr (@p :x) (@p :y) (@p :w) (@p :h))
(.fillRect gr (@c :x) (@c :y) (@c :w) (@c :h))
(if (@g :started)
(.fillRect gr (@b :x) (@b :y) (@b :w) (@b :h))
(do
(.setFont gr f2)
(.drawString gr "PONG!"
(- (/ (@g :w) 2) 46) (- (/ (@g :h) 2) 16))
(.setFont gr f1)
(.drawString gr "PRESS 'S' TO START, 'Q' TO QUIT"
(- (/ (@g :w) 2) 200) (+ (/ (@g :h) 2) 30))))
(. gr dispose)
(. bs show)))))
(mouseMoved [#^MouseEvent e]
(set-game-mouse-y g (.getY e))
(if (> (+ (@g :my) (/ (@p :h) 2)) (@g :h))
(set-game-mouse-y g (- (@g :h) (/ (@p :h) 2))))
(if (< (@g :my) (/ (@p :h) 2))
(set-game-mouse-y g (/ (@p :h) 2)))
(set-paddle-position p (@p :x) (- (@g :my) (/ (@p :h) 2)))
(let [#^JFrame me this] (.repaint me)))
(mouseDragged [e])
(keyPressed [#^KeyEvent e]
(when (and (not (@g :started)) (= (. e getKeyChar) \s))
  (reset-game g b p c) (start-game g))
(when (= (. e getKeyChar) \q) (System/exit 0)))
(keyReleased [e])
(keyTyped [e])
(actionPerformed [e]
;; update ball position
(set-ball-position
b (+ (@b :x) (@b :sx)) (+ (@b :y) (@b :sy)))
;; update ball y direction, bouncing off the top and bottom walls
(when (or (<= (@b :y) 0)
          (>= (+ (@b :y) (@b :h)) (@g :h)))
  (set-ball-speed b (@b :sx) (* -1 (@b :sy))))
;; check if player returns ball
(when (and (< (@b :sx) 0)
           (>= (+ (@b :y) (@b :h)) (@p :y))
           (<= (@b :y) (+ (@p :y) (@p :h)))
           (<= (@b :x) (+ (@p :x) (@p :w))))
  (set-ball-speed b (* -1 (@b :sx)) (@b :sy))
  (set-game-score g (inc (@g :score)))
  (set-ball-speed b (+ 1 (@b :sx)) (@b :sy))) ; game gets faster
;; check when computer returns ball
(when (and (> (@b :sx) 0)
           (>= (+ (@b :x) (@b :w)) (@c :x))
           (>= (+ (@b :y) (@b :h)) (@c :y))
           (<= (@b :y) (+ (@c :y) (@p :h))))
  (set-ball-speed b (* -1 (@b :sx)) (@b :sy)))
;; move the computer paddle toward the ball's center
(if (< (+ (@c :y) (/ (@p :h) 2)) (+ (@b :y) (/ (@b :h) 2)))
  (set-paddle-position c (@c :x) (+ (@c :y) (Math/abs (@b :sx))))
  (set-paddle-position c (@c :x) (- (@c :y) (Math/abs (@b :sx)))))
;; keep the computer paddle on the screen
(if (< (@c :y) 0)
  (set-paddle-position c (@c :x) 0))
(if (> (+ (@c :y) (@p :h)) (@g :h))
  (set-paddle-position c (@c :x) (- (@g :h) (@p :h))))
;; check game over
(when (or (< (+ (@b :x) (@b :w)) 0)
(> (+ (@b :x) (@b :w)) (@g :w)))
(set-paddle-position p (@p :x)
(- (/ (@g :h) 2) (/ (@p :h) 2)))
(stop-game g))
(let [#^JFrame me this]
(.repaint me)))))
(defn -main []
(let [tk (. Toolkit getDefaultToolkit)
ge (GraphicsEnvironment/getLocalGraphicsEnvironment)
gd (. ge getDefaultScreenDevice)
thegame (new-game (.. tk getScreenSize height)
(.. tk getScreenSize width))
theball (new-ball)
theplayer (new-paddle)
thecomputer (new-paddle)
#^JFrame screen (pong-frame
thegame theball theplayer thecomputer
(Font. "Courier New" Font/BOLD 24)
(Font. "Courier New" Font/BOLD 44))]
(set-game-timer thegame (Timer. 20 screen))
(if (not (. screen isDisplayable)) (. screen setUndecorated true))
(.setVisible screen true)
(. (.getContentPane screen) setBackground Color/BLACK)
(. (.getContentPane screen) setIgnoreRepaint true)
(doto screen
(.setResizable false)
(.setBackground Color/BLACK) (.setIgnoreRepaint true)
(.addMouseMotionListener screen) (.addKeyListener screen))
(. gd setFullScreenWindow screen)
(. screen createBufferStrategy 2)
(reset-game thegame theball theplayer thecomputer)))
(-main)
Took a shot at implementing a PI digit generator in Clojure using a ‘fast’ algorithm.
It seemed like a decent enough exercise to try to understand something about performance in Clojure.
MacBook Pro – Intel Core 2 Duo 2.26 GHz – 4GB RAM
Java(TM) SE Runtime Environment (build 1.6.0_15-b03-219)
Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-90, mixed mode)
Clojure 1.1.0-alpha-SNAPSHOT (Aug 20 2009) git commit f1f5ad40984d46bdc314090552b76471ee2b8d01
The previous Common Lisp and Haskell functions to generate the digits of PI were only accurate between 10000 and 20000 digits. The algorithm uses an approximation where we discard a certain number of ‘guard’ digits to get an accurate result. Some background regarding how the number of guard digits is calculated : There is ‘wobble’ in the number of contaminated digits depending on the number of input digits. When only the order of magnitude of rounding error is of interest, ulps (units in the last place) and ε (machine precision) may be used interchangeably, since they differ by at most a factor of β (the ‘wobble’ in the number of contaminated digits). For example, when a floating-point number is in error by n ulps, the number of contaminated digits is about log_β(n). [1] [2]
Here are new versions of the functions using a guard digits calculation :
Common Lisp :
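As a minimal sketch of just the guard-digit calculation itself (the function name, the base-10 default and the +2 safety margin are illustrative assumptions, not part of the original functions) :
;; estimate how many extra 'guard' digits to compute and then
;; discard, using the contaminated-digits ~ log_base(n) rule
;; described above, plus a small safety margin
(defun guard-digits (n &optional (base 10))
  (+ 2 (ceiling (log (max n base) base))))

;; e.g. compute (+ n (guard-digits n)) digits of PI, then drop the
;; trailing (guard-digits n) digits from the result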
Here’s an implementation of Emacs Lisp’s MAPCONCAT function for Common Lisp.
(defun mapconcat (fun list sep)
  (when list
    ;; escape any ~ in SEP so it can be spliced into a format string
    (let ((~sep (with-output-to-string (*standard-output*)
                  (map nil (lambda (ch) (princ (if (char= #\~ ch) "~~" ch))) sep))))
      ;; build the format string "~A~{SEP~A~}" and apply it to the mapped list
      (format nil (format nil "~~A~~{~A~~A~~}" ~sep)
              (funcall fun (first list))
              (mapcar fun (rest list))))))
timed :
* (time (mapconcat 'identity *mylist* "-"))
Evaluation took:
2.805 seconds of real time
2.746358 seconds of total run time (2.736834 user, 0.009524 system)
[ Run times consist of 0.004 seconds GC time, and 2.743 seconds
non-GC time. ]
97.90% CPU
6,324,642,149 processor cycles
17,734,520 bytes consed
"0-1-2-3-4-5-6-7-8-9-10- ... "
The last article ended with a challenge hoping that someone would
come up with an improved version of the boggle solver that beats my version.
Well, someone did; they also proved that with this particular algorithm
the Lisp version can come within about 20% of C. Very good !
Here are the new graphs (note that the first is no longer on a logarithmic scale).
The patches have been merged into the boggle repo.
The previous article had some serious traffic yesterday and today.
Due to abuse of the moderation system on some sites I’ve been forced to present my response here separately.
To the SBCL contributor : Relax, SBCL is still a great Lisp compiler. It always has been and I’m sure it always will be. Your work on the compiler is appreciated. It’s unfortunate that you seem to fit the pontificating armchair-theoretician stereotype. You do have a few valid points to share in spite of your mostly incorrect presumptions regarding the Lisp code. The single worthwhile and correct contribution you made to the discussion was your suggestion on type declarations in the critical path of the code. Thanks for that; you were one of only two people who have actually addressed the performance of the code so far. I posed a challenge to you because of your claims of being able to easily bring the code to within 3x of the C version. Your dismissive refusal to substantiate your claims with any real evidence speaks volumes to a wide-ranging audience now.
The challenge to you stands indefinitely …
The myths and legends precede the language. Lisp.
(The same holds true for its dialects and derivatives.)
Its ‘bizarre’ syntax, its alleged ability to stand up to low-level language performance, the unique and powerful abstractions afforded to the programmer who even dares to attempt its almost vertical cliff face of learning. Or so it seemed back then. With its more recent popularity there’s been much hype surrounding its capabilities, especially in contrast to other languages.
**********
For those that are unhappy about this not being a Symbolic Integration example then please read Norvig’s “Paradigms of Artificial Intelligence Programming – Case Studies in Common Lisp”, Chapter 8 – Symbolic Mathematics : A Simplification Program, 8.6 – Integration, page 252.
(defparameter *small-dx* 0.0001)
(defun integrate (f a b)
  "integrate a function f from a to b"
  (labels ((integrate-gen (x dx sum)
             ;; the generalized inner function, written for tail recursion :
             ;; integrates f from x to b by accumulating f(x)*dx
             (if (> x b)
                 sum
                 (integrate-gen (+ x dx) dx (+ sum (* (funcall f x) dx))))))
    (integrate-gen a *small-dx* 0)))
(let
((f1 (lambda (x) 1))
(f2 (lambda (x) (expt x 2)))
(f3 (lambda (x) (* x (exp (expt x 2))))))
(print (integrate f1 0 5))
(print (integrate f3 0 1)))
Andrew Sorensen is doing some beautiful live “Lisp coding as art” with synthesized sound and OpenGL.
Give the video some time to start up or skip ahead (at least 3 minutes).
Before I go any further : let me state upfront that knowing Lisp has changed me as a programmer for the best. It’s been an invaluable experience that I’ve carried into every corner of my career and every other language that I’ve coded with.
Now that we have that out of the way let’s move on…
There are a number of problems with Common Lisp that go unmentioned in the community. The fact that a prominent member of the CL community recently highlighted these (at ILC 2009 no less) could be viewed as a breakthrough for the language in itself. Here are the show-stoppers, in my opinion, from personal experience, that I’ve faced in trying to write code that performs acceptably in time and space :
Ron Garret puts it perfectly : “Whatever problems I may have, an unwillingness to learn new things is not one of them. I love to learn new things. That’s one of the reasons I hate Java, because learning Java didn’t teach me anything except how truly brain-damaged a language can be. (I’ve never learned Perl, but I’ve never learned how to run the deep fryer at a McDonalds either. I like learning new things, but life is short and there are some things I’m content not to know.)”
Brave New World – Aldous Huxley :
Re-reading but not from any paradise-engineering paranoia perspective. Simply insightful, focused and a great style. First covered this in high school decades ago. It’s ironic and deviously cool that this was actually course mandated at my high school ! (Have yet to finish Island.)
Pugetopolis – Knute Berger :
The Seattle analog of Palahniuk’s A Walk in Portland, Oregon. Useful because apparently Seattle is one of America’s most difficult cities to grok.
Aesthetic Computing (MIT Press) – Paul A. Fishwick :
“… key scholars and practitioners from art, design, computer science, and mathematics lay the foundations for a discipline that applies the theory and practice of art to computing. Aesthetic computing explores the way art and aesthetics can play a role in different areas of computer science. One of its goals is to modify computer science by the application of the wide range of definitions and categories normally associated with making art. …”
In the domain of Gabriel’s Patterns of Software: Tales from the Software Community (scroll down after link).
More info here with pictures.
See lispy’s insightful coverage on the talks if you couldn’t make it.
One of the highlights was getting to sign a copy of Lisp 1.5 Programmer’s Manual that was then gifted to John McCarthy after the event !
Reading
Robert Cialdini’s “Influence” –
Plenty of social psychology observations that should be common sense if you’re paying attention to the world around you. Read it on a recommendation, thanks to Scott Adams.
Neil Gaiman’s “American Gods” –
Yea yea, it’s nothing new, but I got through it last year and realized why his books morph fairly quickly into Hollywood scripts. Not much else to say about it.
“The True Believer” by Eric Hoffer –
Although written half a century ago it’s incredibly contemporary in many ways. More social psychology, but from a mass-movement perspective. Written in a practical style with very distilled ideas. Some paragraphs are worth more than a few hours of thought.
Frank Herbert’s “Dune” –
In Science Fiction what else comes close ?
Listening
Trentemøller –
‘The Last Resort’ and ‘The Trentemøller Chronicles’ are simply great albums. Wonderfully lacking in exploratory fear of the electronic genre.
Miss Kittin – Batbox
New directions here on her second album. Diving a little deeper and darker. So worthwhile.
Dust Galaxy – Dust Galaxy
Thievery Corporation’s Rob Garza’s rock side project. Still electronic, but psychedelic and even grungy and dub.
If anyone wants to test Firefox 3.0 BETA 2 on FreeBSD then here’s a static build. (I’ve only tried it on 6.2; let me know if you have a different version.)
I couldn’t find a 3.0b2 build for FreeBSD on the Mozilla site. Pretty normal though; I’m guessing most FreeBSD folks just wait for newer versions to show up in Ports.
A movie with a cast including Ray Kurzweil (as himself), Marvin Minsky, self-help guru Anthony Robbins, Alvin Toffler, Mitch Kapor, Bill Joy, Aubrey de Grey and more.
… the commercial software community has developed one particular response to resource limitation: a fevered, workaholic approach to software development—error prone, hectic, family-destroying, health-degrading, night-haunting. If you are undermanned by a factor of 2, add a second 8-hour workday per physical day. If you are operating under a schedule 50% too short, add in another 32 hours per week by working weekends. Then pray for luck or push back on features and quality.
Some Haskell fans seem impressed with better performance for a fibonacci function compared with similar implementations in Ruby and Python. They should be. I may be turning into a Haskell fan myself actually. Here’s why …
Read this and this before going on.
Impressive. Yea I thought so too until I implemented the same fib function in Lisp and ran it on SBCL.
Here it is…
(defun fib-trec-naive (n)
  "Naive tree-recursive Fibonacci number function"
  (if (or (= n 0) (= n 1))
      n
      (+ (fib-trec-naive (- n 1)) (fib-trec-naive (- n 2)))))

(defun print-fib-trec-naive (start end)
  (format t "n=~D => ~D~%" start (fib-trec-naive start))
  (if (< start end)
      (print-fib-trec-naive (+ start 1) end)))
According to this Telegraph article : “an Ipswich-based businessman invented a £190 bottle that makes foul-smelling water drinkable in seconds.”
That’s about US$380 over here. Each bottle can filter 4,000 or 6,000 litres without a filter change.
The bottle can even make water contaminated with fecal matter safe to drink, because it filters out anything larger than 15 nanometres. That covers bacteria, which at their smallest are around 200 nanometres (filterable by conventional water filters), and even viruses, which are usually no smaller than about 25 nanometres.
I’ve ranted previously about Python and made promises of future posts about why Ruby is so much cooler. It really is, and this article covers one of the common idioms used by most Ruby folks who have taken a shot at writing their own domain-specific language in Ruby. It’s a really useful idiom, but apparently not a very well understood or well explained one among Ruby coders.
The previous post was pretty grumpy about Python/Django and I was going to write a follow-up about why Ruby is a better language (unfortunately not the platform’s performance just yet but that is soon to change with YARV).
To make this post fairer to Ruby/Python than the last one, let’s look at one reason why these languages don’t come close to Lisp. It’s something mentioned less often (but still important) when comparing Lisp with other languages, where the focus is usually on macros and s-expressions.
The Regressive Imagery Dictionary (RID) is a coding scheme for text analysis that is designed to measure ‘primordial’ and conceptual content. Primordial thought is the kind of free-form, associative thinking involved in fantasy and dreams. Like Freud’s id, I guess. Conceptual (or secondary) thought is logical, reality-based and focused on problem solving.
RID contains about 3,000 words grouped into categories that are themselves classified as primary, secondary, and emotional. A piece of text is classified by what percentage of its words fall into each category.
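As a toy sketch of that scoring scheme in Common Lisp (the three-word categories below are invented stand-ins for the real ~3,000-word dictionary) :
(defun tokenize (text)
  "Split TEXT into lowercase words (maximal alphabetic runs)."
  (let ((words '()) (current '()))
    (flet ((flush ()
             (when current
               (push (string-downcase (coerce (nreverse current) 'string)) words)
               (setf current '()))))
      (map nil (lambda (ch)
                 (if (alpha-char-p ch) (push ch current) (flush)))
           text)
      (flush))
    (nreverse words)))

(defparameter *rid-mini*
  ;; invented mini-categories; the real RID has ~3,000 words
  '((:primary "dream" "water" "fire")
    (:secondary "because" "therefore" "plan")
    (:emotional "love" "fear" "joy")))

(defun rid-score (text)
  "Return, for each category, the percentage of TEXT's words in it."
  (let* ((words (tokenize text))
         (n (max 1 (length words))))
    (loop for (cat . vocab) in *rid-mini*
          collect (cons cat
                        (* 100.0 (/ (count-if (lambda (w) (member w vocab :test #'string=))
                                              words)
                                    n))))))

;; (rid-score "I dream of water because I fear the fire")
;; => ((:PRIMARY . 33.33..) (:SECONDARY . 11.11..) (:EMOTIONAL . 11.11..))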
‘Being John Malkovich’ is a 1999 film written by Charlie Kaufman and directed by Spike Jonze.
The movie stars John Cusack, John Malkovich, Cameron Diaz, Catherine Keener, Charlie Sheen, Orson Bean and Mary Kay Place.
* warning – mini spoiler below *
The film centers on Craig Schwartz (John Cusack), who finds a portal into John Malkovich’s subconscious. He exploits this to win the affection of Maxine (Catherine Keener), whom he loves but who has no interest in him.
‘A train wreck is a project where everyone assumes things are going well, progress is being made, and it appears that everything will turn out well. In reality, the train is barreling down the tracks at good speed until it hits the bridge that is out and then crashes spectacularly into the valley. No one is actually driving the train and all anyone sees is out the side windows, so the illusion of progress is great.’
“THE DALLES, Ore., June 8 — On the banks of the windswept Columbia River, Google is working on a secret weapon in its quest to dominate the next generation of Internet computing. But it is hard to keep a secret when it is a computing center as big as two football fields, with twin cooling plants protruding four stories into the sky.”
(This has been around for a while now but it still deserves more linkage.)
Anyone who has ever learned to program in Lisp will tell you it is very different from any other programming language. It is different in lots of surprising ways. This comic book will let you find out how Lisp’s unique design makes it so powerful !
Artificial Intelligence –
An early look at artificial intelligence. Guests include Edward Feigenbaum of Stanford University, Nils Nilsson of the AI Center at SRI International, Tom Kehler of IntelliGenetics, Herb Lechner of SRI, and John McCarthy of Stanford. Featured demonstrations include Inferential Knowledge Engineering and the programming language LISP. Originally broadcast in 1984.
AND
Daniel G Bobrow: Common LISP Object Standard (1987)
Regarding the role of Lisp and game development (specifically on the PS2), Andy Gavin has this to say…
“Lisp was just the best solution for this job,” comments Gavin. “With leading edge game systems like ours, you have to deal with complicated behaviors and real-time action. Languages like C are very poor with temporal constructs. C is just very awkward for a project like this. Lisp, on the other hand, is ideal.”
“To me the most demoralizing aspect of the traditional office is that you’re supposed to be there at certain times. There are usually a few people in a company who really have to, but the reason most employees work fixed hours is that the company can’t measure their productivity.
The basic idea behind office hours is that if you can’t make people work, you can at least prevent them from having fun. If employees have to be in the building a certain number of hours a day, and are forbidden to do non-work things while there, then they must be working. In theory. In practice they spend a lot of their time in a no-man’s land, where they’re neither working nor having fun.