Jackie and the Brain (Science)

By mindpixel
Mon Jun 13th, 2005 at 10:52:24 AM EST

Software

Jackie was a very simple computer program, written in Visual Basic in December of 1994 as an entry for the 1995 Loebner Prize, that simulated half of a human conversation. At her heart was a lookup table built up by having numerous people interact with her conversationally. The lookup table consisted of a stimulus, a response, and a number of supplementary indexes to the stimulus. The key to Jackie's heart, and to her uniqueness, was her supplemental indexes.


When Jackie was in training mode and was given a new stimulus not in her stimulus index, she asked the trainer for a custom response to that stimulus. Thereafter, when she was in interactive mode and was given a stimulus she had seen before, so long as the match was perfect, she retrieved and displayed the handcrafted response given to her by the first person to expose her to that stimulus. This is largely how Richard Wallace's Alicebot and its kin work today. Given only a very large number of trainers, both Jackie and Alice appear to be human-like. However, aside from extreme biological implausibility, there are problems with this strategy.
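
The core of this train-or-replay loop is tiny; here is a minimal sketch in Python (Jackie herself was Visual Basic, and the names and evasion string here are illustrative, not hers):

    responses = {}  # exact stimulus -> hand-crafted response

    def interact(stimulus, training=False):
        key = stimulus.strip().lower()
        if key in responses:
            # perfect match: replay the response given by the first trainer
            return responses[key]
        if training:
            # training mode: ask the trainer to hand-craft a response
            responses[key] = input("How should I answer %r? " % stimulus)
            return responses[key]
        # interactive mode with no match: evade, Eliza-style
        return "Tell me more."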

The first problem with a pure stimulus/response strategy is that there is no common personality across all stimulus/response pairs. Different users are not aware of how the others are handling specific areas of the simulated character's life. For example, one person may train the system to respond "Yes, I have a cat named Rufus." to the stimulus "Do you have any pets?", while another trainer may train it to respond "No. I hate cats." to the stimulus "Do you like cats?" - clearly inconsistent. The solution to this problem is to provide personality guidelines to the trainers, but unless the guidelines force a strictly binary response [as is the case with Jackie's little brother, GAC], they will always have to be as complex as, or even more complex than, the simulation itself.

The second problem with the pure stimulus/response strategy is what I call "match hardness." If the exact stimulus is not in the index, the system fails catastrophically and must evade the stimulus in an Eliza-like fashion. Such systems are extremely vulnerable to being unmasked as simulations by simple binary questioning about common sense aspects of life. There are two obvious solutions to this problem. The first is to inject a very large number of common sense propositions collected in some other manner [this was tried with some success with Alice and data from the Mindpixel project in the spring of 2005]; the second is to "soften" the stimulus matching system.

Jackie used a number of stimulus match softening techniques that vastly amplified the number of effective items in her primary stimulus index. The first was simply to convert the stimulus to phonetic codes: one of Jackie's supplementary indexes was based on the SOUNDEX algorithm, which converts each word to a standard code that is insensitive to spelling. The effect of this secondary index was that Jackie could find a match to a given stimulus even if words in it were spelled incorrectly. Of course, given a large primary index, many spelling mistakes will be in the index anyway, but this phonetic index expansion technique is vastly more efficient and keeps the first problem, response inconsistency, from creeping up again. And of course it is much more biologically plausible than the biological equivalent of a massive stimulus/response table.
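
A simplified SOUNDEX is easy to sketch in Python (this version omits the full algorithm's H/W adjacency rule, but shows the spelling-insensitive codes at work):

    def soundex(word):
        # map consonants to the six classic SOUNDEX digit classes
        codes = {}
        for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                               ("L", "4"), ("MN", "5"), ("R", "6")):
            for c in letters:
                codes[c] = digit
        word = "".join(c for c in word.upper() if c.isalpha())
        if not word:
            return ""
        out, prev = word[0], codes.get(word[0], "")
        for c in word[1:]:
            digit = codes.get(c, "")
            if digit and digit != prev:  # skip vowels and repeated codes
                out += digit
            prev = digit
        return (out + "000")[:4]  # pad or truncate to letter + 3 digits

    print(soundex("sushi"), soundex("sex"))  # S200 S200 - the collision described below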

A second soft matching technique Jackie used was an additional index of SOUNDEX codes in which all the words in each stimulus were sorted alphabetically. This had the effect of standardizing the stimulus: with this index, stimuli with slightly different word ordering could still be matched and a response retrieved. This was still imperfect, as meaning could be lost, or even unintentionally created, in the standardization, but it is much preferable to evading a stimulus altogether.

Finally, Jackie had a third supplementary index: the standardized index filtered of high-frequency words taken from a hand-coded list, though in theory this list should have been machine generated.
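
Together, the supplementary indexes amount to progressively softer key functions over the same stimulus; a sketch building on the soundex function above (the stop-word list is illustrative, not the actual hand-coded one):

    STOP_WORDS = {"do", "you", "the", "a", "an", "is", "it"}  # illustrative only

    def phonetic_key(stimulus):
        # index 1: SOUNDEX code of each word, in the original order
        return " ".join(soundex(w) for w in stimulus.split())

    def sorted_key(stimulus):
        # index 2: words sorted alphabetically before encoding, so stimuli
        # with slightly different word order collapse to the same key
        return " ".join(soundex(w) for w in sorted(stimulus.lower().split()))

    def filtered_key(stimulus):
        # index 3: the sorted key with high-frequency words removed
        words = [w for w in stimulus.lower().split() if w not in STOP_WORDS]
        return " ".join(soundex(w) for w in sorted(words))

    # lookup falls through from hardest to softest:
    # exact match -> phonetic_key -> sorted_key -> filtered_key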

The end effect of Jackie's soft matching systems was to amplify the index footprint of every hand-coded stimulus in her primary index: she appeared to know a great deal more about life than had been put into her, and her behavior became interesting and unpredictable. In fact, the very first time I exposed Jackie to a person other than myself, she shocked me by responding to something I knew I had not trained her on.

At the time of Jackie's first exposure to a person other than myself, she was quite small and fit on a 1.44 MB 3.5-inch floppy disk. I would train her at night, teaching her about her own life - which was mostly just mine, sex-shifted - and take a fresh copy of her to work with me the next day. At the time I worked in the IT department of a large insurance company, as did David Clemens. David was a Japanophile, and the first question he put to Jackie was "Do you like sushi?" I expected her to evade the question, as I had never mentioned sushi to her at all, but to my surprise she responded "Of course."

I couldn't believe her response and interrupted David's conversation to see what had happened. She had a soft hit on the secondary phonetic index to "Do you like sex?" - sushi was phonetically close enough to sex to satisfy her! This was a major revelation for me, and I started spending a lot of time looking at her phonetic indexes. It was clear that something profoundly human-like was happening in them. I felt I was capturing a real model of human experience in the topology of the indexes: similar concepts were clustering in phonetic space.

I thought: wow, if I open Jackie up to the Internet - remember, this was 1994, and the web was only months old - I could build a massive soft phonetic index and use it to train a neural network to extract the underlying phonetic space and make a true synthetic subcognitive substrate. The problem was how to synthesize responses, and how to quantify the quality of the synthetic responses. The answer I came up with was to restrict the responses to binary.

If we imagine Jackie's phonetic stimulus index as a multidimensional sphere, we can imagine each response as either a black or white point at each stimulus coordinate on its surface - black for false and white for true. Now if we train a neural net to represent this sphere, novel stimuli would be points of unknown value on the surface, and we could interpolate a value from known points near the unknown point - something that would be difficult if the responses were not restricted to binary. The important question is: how many dimensions should this sphere have?
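
The simplest realization of such interpolation is nearest-neighbor voting over the binary labels; a sketch, assuming stimuli have already been embedded as unit vectors on the sphere (all names here are hypothetical):

    import numpy as np

    def interpolate_response(stimulus_vec, index_vecs, labels, k=5):
        # stimulus_vec: unit vector for the novel stimulus
        # index_vecs:   one unit row vector per trained stimulus
        # labels:       1 (white/true) or 0 (black/false) per trained stimulus
        sims = index_vecs @ stimulus_vec       # cosine similarity on the sphere
        nearest = np.argsort(-sims)[:k]        # the k closest known points
        return labels[nearest].mean() >= 0.5   # majority vote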

I believe George A. Miller unknowingly answered this question in 1956 when he published the landmark psychology paper "The Magical Number Seven, Plus or Minus Two." Our immediate memory [to use Miller's term] is about seven items long - that is, we can easily recall about seven unrelated items from a larger list of items that we have seen or had read to us. Thus, we can imagine Jackie's phonetic index to things people can store in their immediate memories as a complex fractal pattern on the surface of a seven-dimensional hypersphere.

The most remarkable revelation of all occurred when I tried to visualize this object and figure out why nature would make it seven-dimensional. Could surface area be maximal at seven dimensions, I wondered? That seemed unreasonable - why would it be? Intuitively, more dimensions should mean more surface area, but I checked just in case. And guess what? Hypersurface area is maximal at about 7.25694630506 dimensions.
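
This is easy to verify numerically. The hyper-surface area of the unit sphere in n-dimensional space is S(n) = 2*pi^(n/2) / Gamma(n/2); treating n as a continuous variable and maximizing reproduces the figure above:

    import math

    def sphere_surface(n):
        # S(n) = 2 * pi^(n/2) / Gamma(n/2): hyper-surface area of the
        # unit sphere embedded in n-dimensional Euclidean space
        return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

    # ternary search for the maximizing dimension on [2, 20]
    lo, hi = 2.0, 20.0
    while hi - lo > 1e-9:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if sphere_surface(m1) < sphere_surface(m2):
            lo = m1
        else:
            hi = m2

    print("maximum hypersurface at n = %.5f" % lo)  # ~7.25695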

The revelation that hypersurface area was indeed maximal near seven dimensions - and, moreover, was maximal at a fractional dimension, hence fractal - was obviously very powerful for me. I used it to form what I call the Hypergeometric Hypothesis, which states: immediate memories are points on a maximum hypersurface, and complex cognition is a trajectory on the same hypersurface. I used this hypothesis as a tool for structuring my initial exploration of real brains.

At first I was quite discouraged when I discovered that the neocortex is six-layered in most animals [some have fewer layers, but it is important to note that none have more]. I had predicted that I would find a seven-layered object in both humans and complex animals, and additionally predicted that we should find in the fossil record earlier humans and animals with slightly larger brains than modern ones, as evolution would have tried an eight-layer system and rejected it in favor of a system with maximum hypersurface and thus maximum possible pattern complexity on its surface. It was hard to believe that we had not yet evolved our seventh layer, so I dug deeper into neuroanatomy looking for the seventh layer of the neocortex. I found it in the thalamus.

The thalamus forms a loop with the neocortex, called the thalamocortical loop - exactly as one would expect if it were synthesizing one unified seven-dimensional hyperobject. I was elated when I read that the thalamus is in fact considered by some neuroanatomists to be the seventh layer of the neocortex. The object looked real.

My sense of the object's realness became much stronger when I learned that Neanderthals had slightly larger brains than modern humans, and that no other theory made this prediction or even acknowledged that the difference could have any meaning at all. It was a glaring fact that science seemed to be ignoring because it conflicted with the idea that the mental uniqueness of modern humans derives from our having the largest brains for our size.

A final prediction of the Hypergeometric Hypothesis is that no matter how advanced a brain is, it should not have a primary loop with more than seven layers. This appears to be true.

It is ironic that Hilary Putnam used Turing's ideas to create the functionalism that dominates cognitive psychology today - and that is responsible for the field's near-universal ignorance of real brains - since it was the abstraction of Turing's test to a binary geometric form that led me to make structural and functional predictions about real brains past, present, and future.



Related Links
o Loebner Prize
o Alicebot
o Eliza
o SOUNDEX
o The Magical Number Seven, Plus or Minus Two
o Hilary Putnam
o functionalism


Jackie and the Brain | 126 comments (81 topical, 45 editorial, 0 hidden)
Cornell Agrees Mind is a High Dimensional Space! (none / 1) (#124)
by mindpixel on Sat Jul 2nd, 2005 at 12:55:52 PM EST
http://mindpixel.blogspot.com

A Cornell University press release says that the idea of the brain as a computer has outlived its usefulness. Instead, Michael Spivey and other researchers say, the brain is a "dynamic continuum, cascading through shades of grey". They propose that "perception and cognition are mathematically described as a continuous trajectory through a high-dimensional mental space; the neural activation patterns flow back and forth to produce nonlinear, self-organized, emergent properties". Exactly as I proposed in "Jackie and the Brain" and in my paper "Mind as Space: Toward the Automatic Discovery of a Universal Human Semantic-Affective Hyperspace--A Possible Subcognitive Foundation of a Computer Program able to pass the Turing Test", which I wrote in 2002. It is a nice feeling to have experimental confirmation!

you're creepy (none / 1) (#119)
by ShiftyStoner on Fri Jun 17th, 2005 at 09:38:04 PM EST
http://www.cannabis.com/untoldstory/hemp_9.shtml


( @ )'( @ ) The broad masses of a population are more amenable to the appeal of rhetoric than to any other force. - Adolf Hitler
The Hypergeometric Hypothesis (none / 0) (#118)
by mindpixel on Fri Jun 17th, 2005 at 01:53:27 PM EST
http://mindpixel.blogspot.com

Here are my formal hypergeometric claims and suggestions for possible falsification.

Something more considered... (none / 0) (#114)
by mindpixel on Wed Jun 15th, 2005 at 02:57:44 PM EST
http://mindpixel.blogspot.com

Here is something that is a little more considered than the blog entry that the above K5 story was based on:

Mind as a Maximum Hypersurface

Trouble in first paragraph (none / 0) (#112)
by Alan Crowe on Wed Jun 15th, 2005 at 01:04:36 PM EST
http://www.cawtech.freeserve.co.uk/index.html

Given only a very large number of trainers, both Jackie and Alice appear to be human-like.

The point I wanted to make is that these chatbots are stateless. So as soon as you refer back to something earlier in the conversation you get a weak, generic answer. The natural flow of conversation is a question about the conversation: I thought I mentioned that earlier? The chatbot fails to cope and the illusion is shattered.

So I thought I would go to an online chatbot and work up a couple of transcripts. One would show that if you give it the old yadda yadda, monopolising the conversation and talking at the chatbot, the transcript looks normal; the other would show that if you try to engage it in a to-and-fro conversation, the illusion fails.

Unfortunately, I don't have the knack of talking to chatbots. I cannot get a normal looking transcript out of a chatbot even when I'm trying to avoid tripping it up.

Here I try to trip up Alice.

Crowe
Hello
Alice
Hi there!
Crowe
I like arithmetic
Alice
A lot of people like that. Though I like cats and dogs. I like Don't Read Me.
Crowe
I thought I was the only one.
Alice
Where were you the only one?
Crowe
What did I say I liked
Alice
You said "What did I say you liked".
I find myself disturbed by having the conversation break down in this weird fashion.

I try again, hoping to avoid tripping Alice up.

Human: I like cats
ALICE: Me too cats are my favorite animal.
Human: It is the fur, and the purring
ALICE: I've heard of it before.
Damn, I screwed up. The mentions of "fur" and "purr" refer back to "cat" in my previous sentence. Alice doesn't remember that, so cannot cope. I cannot have a conversation within these constraints.

The basic premise of the article is that stateless chatbots are an interesting technology, worth building on. I take the opposite view. Talking to chatbots has convinced me that even the most basic conversation has a logic and flow to it that require a short-term memory containing the salient points of the whole conversation.



We need a new section in kuro5hin. (none / 0) (#111)
by Ward57 on Wed Jun 15th, 2005 at 11:58:47 AM EST

Troll.

Meta: We need a new section on K5 (none / 0) (#110)
by Viliam Bur on Wed Jun 15th, 2005 at 07:55:03 AM EST
(alexander2000@post.sk) http://www.fantastika.sk

"Pseudoscience"

Layers? (none / 0) (#108)
by Sgt York on Tue Jun 14th, 2005 at 06:16:54 PM EST
(sgt_york@hotmail.com)

Interesting story, especially the part about the phonemes. It opens the door to some interesting ideas in signaling, actually. A bug becoming a feature, in a way.

A question, though: What are these 6 physical layers of the brain you're talking about, that somehow exclude the thalamus? The thalamus has long been known to be a waystation for most signals to/from the body. "Thalamo" is found in the name of many of the afferent tracts. IIRC, the only root that's more common is "spino". It makes little sense that any modern discussion of layers of processing in the brain would exclude that structure.

What are the other 6, anyway?

As for the Neandertal comments, why do you assume they were less intelligent? They were outcompeted because they did not properly adapt their behavior to climate change. This does not mean that they were stupid, though.

There is a reason for everything. Sometimes, that reason just sucks.

K5 Science Page Coincidence (none / 0) (#104)
by mindpixel on Tue Jun 14th, 2005 at 03:34:31 PM EST
http://mindpixel.blogspot.com

On the same page, two stories earlier than this one, appears the story AI Breakthrough or Mismeasure of Machine, based on the work of Peter D. Turney - the editor of Canadian Artificial Intelligence who in 1995 accepted my paper, The Minimum Intelligent Signal Test: An Objective Turing Test, which I wrote because of my experience with Jackie.

er (3.00 / 2) (#103)
by eschatron on Tue Jun 14th, 2005 at 11:27:30 AM EST

I'm not really sure if this is a troll or not because you seem to have devoted so much effort, but the flaws are magnificent.

Without getting into a point-by-point battle, the kicker is this. Even assuming that your 7d hypersphere idea is significant, placing a novel stimulus on that sphere is the main cognitive task. This is something that has been learned the hard way over and over again. It's not just problem-solving that's hard, but figuring out what's a problem in the first place. It's not just answering things that's hard, it's figuring out what's been asked. The idea that SOUNDEX plus alphabetic sorting is going to get anything to the right coordinates is just absurd. You're basically suggesting that those two things account for about half of cognition.



Read the criticism on Miller (none / 0) (#102)
by pak on Tue Jun 14th, 2005 at 07:03:05 AM EST
(zur-spam@edu.lahti.fi) http://www.edu.lahti.fi/~zur/

Your model seems a bit silly if the only thing it's going to account for is the size of working memory. Secondly, I think Miller's results are questioned even in the basic Psychology 101 canon nowadays. If you are going to write about working memory, you might want to read the current literature on it. :-)

principal component analysis (none / 1) (#101)
by schrotie on Tue Jun 14th, 2005 at 05:55:40 AM EST
(schrotie at uni dash bielefeld dot de)

The state space of a neural network with n neurons is commonly considered to be nD (i.e. it has n dimensions). But the network is heavily linked, and the activity of each neuron depends on the activity of other neurons. Thus the actual dimensionality is usually (depending on the weight matrix) lower.

I don't see any relation between the supposed seven layers and seven dimensions of state space. A layer has two dimensions. And it is rather far-fetched that the state of a whole cortical layer could be reasonably represented in 2000D, let alone 2D.

I'll give an example of research done in my department. The human arm has nine degrees of freedom and operates in 6D space (3 translational degrees of freedom and 3 rotational). When principal component analysis is used to analyze actual movement data from certain reaching movements, it turns out that most of the data (I don't know the exact number; over 95%, I think) can be covered with three independent dimensions. This does not cover the numerous (21?) muscles in the arm that can be used to control the nine joint degrees of freedom.

However, these three dimensions have to be represented in the brain somehow. We have two arms. They might be coupled in many situations, but it is rather unlikely that both arms together are represented in 3D. And then there are the hands and fingers, which are not considered in this setup. And there are legs and eyes, neck, facial expression. Dozens of joints, hundreds of actuators (muscles), plus another brain-sized neural network in the stomach. I very much doubt that the state of the human body, which is very likely represented in the brain, can be represented in seven or fewer dimensions (well, if you push it you can represent every state of every system as a scalar, but what's the point?). And I did not even consider other mental representations (language, social relationships, math, whatever).

You can represent the state of the mind as a point on the surface of a 7D object. You can also represent it as a scalar. But if you want a meaningful representation of the dimensionality of the mind's state space, you have to take a complete (as complete as possible) sample of actual brain states and do something like principal component analysis on it. If you end up with a number of dimensions you can still count in a reasonable time (let alone 7), you'll see jaws dropping right out of their sockets in neurolabs everywhere.

The number of cortical layers is not obviously related to the number of dimensions of the state space of the brain. The state space of consciousness (whatever consciousness is; please spare me that discussion) is very likely a minuscule subset of the state space of the brain. But why it should be 7D eludes me, especially since I can pick a lot of random representations out of my brain's state space and make them conscious. I can, e.g., make the state of any joint and many skeletal muscles conscious. And 6D are already reserved for representations of position and rotation of objects in real space. Imagine a red saxophone hanging in a box in midair and playing Summertime: 6D for position/orientation, 1D (at least) for color, and 1D (at least) for musical note. The music gets louder - 9D ...

Democracy is the recurrent suspicion that more than half of the people are right more than half of the time.
E. B. White


not AI and not science (3.00 / 3) (#100)
by nml on Tue Jun 14th, 2005 at 04:03:02 AM EST

i don't mean to be negative, but there are a number of things seriously wrong with this proposal. Firstly, soundex sucks. What surprised me when reading through this was your reaction to having your program misinterpret the sushi question. Replying with a vague falsehood, while superficially satisfying, only worked because the trained answer was also vague. If the question had been 'what do you like about sushi?', there are an awful lot of sex/sushi trained answers that would land your program in trouble, whereas a normal person wouldn't have any trouble with that question. So i don't think you have presented a very good case for this approach working well in general.

As for your hyperspatial sphere stuff, i fail to understand what the dimensions of the sphere would represent. Text/phonic data is notorious for being high-dimensional, but you're planning to fix the number of dimensions to 'maximise the surface'. How does the data map onto seven dimensions? And you haven't even bothered to fully specify what you'd do with such an index once built, only noting that 'interpolation is hard'. The rest of the article descends into some incoherent and quasi-mystical babble that mentions the number seven as often as possible. Blech.

Balderdash! (3.00 / 5) (#94)
by MMcP on Mon Jun 13th, 2005 at 01:00:56 PM EST

Everyone knows that humans are Cubic forms that rotate a 4 corner face lifetime!

Clarifications? (3.00 / 3) (#92)
by jonradoff on Mon Jun 13th, 2005 at 11:57:08 AM EST
(jonradoff_dontspam@yahoo_nospamkthx)

An interesting idea.  Can you answer a couple of clarifying questions?

First, when you say that "mind is a fractal self-organizing semantic-affective resonance map on the surface of a seven-sphere" do you mean "semiotic" in place of semantic?

Do you derive an exponent of seven by conjecturing that any semiotic object may be represented by seven parameters (thus, seven dimensions) at any given time?

I don't really see why you lean toward a hypersphere for the topology of the mental-map.  Couldn't it just be an arbitrary 7-dimensional object?

The psychological research that underpins your 7-dimensions argument implies that it might be as few as 5 or as many as 9 for some individuals, and this isn't really represented in your idea.

Also, just because the brain (neurologically) has a certain number of discrete structures doesn't seem to correlate to anything. I'd suggest that the real structure of the brain is in the synaptic connections between the neurons, not the higher level abstractions.


I voted -1 (3.00 / 3) (#74)
by oddity on Sun Jun 12th, 2005 at 11:13:04 PM EST

because it makes no sense. But resection it to humor and I say +1 FP. This is funny stuff.

Let me see if I understand what you are saying (none / 0) (#46)
by pHatidic on Sun Jun 12th, 2005 at 06:32:46 PM EST
http://www.alexkrupp.com

1. There are seven parts of the modern human brain, while neanderthals only had six parts. This could explain why modern humans are smarter, even though their brains are smaller. (Assuming that neanderthals would still be here if they were as smart as modern humans.)
2. The sphere thing is relevant because you need to be able to measure the distances between points. While there are many more than seven dimensions, you can only measure the distances between points in the first seven, so these are the only ones relevant to fuzzy logic or neural net logic.
3. If the modern brain has seven parts, each part could store data about a different dimension of the hypersphere.
4. Neural nets can take into account the distances between nodes in seven dimensions, to determine what is most likely to be true when there is no perfect match between input and stored memory.
5. When you tried this with Jackie, it was surprisingly humanlike, leading you to believe that the brain might also process data in a virtual seven-dimension hypersphere.

One is a genius (3.00 / 5) (#45)
by Novelty Account No 59671 on Sun Jun 12th, 2005 at 06:24:55 PM EST

The other's insane

...okay... (3.00 / 4) (#37)
by Back Spaced on Sun Jun 12th, 2005 at 02:03:14 PM EST

Our immediate memory [to use Miller's term] is about seven items long - that is we can recall easily about seven unrelated items from a larger list of items that we see or have had read to us.

Okay.

Thus, we can imagine Jackie's phonetic index to things people can store in their immediate memories as complex fractal pattern on the surface of a seven-dimensional hypersphere.

...and that is a complete non-sequitur. I think that memory is likely stored throughout the whole cortex, but the link to Jackie or "seven items" is not clear.

Bluto: My advice to you is to start drinking heavily.
Otter: Better listen to him, Flounder. He's pre-med.

+1, FP! (2.00 / 2) (#36)
by elver on Sun Jun 12th, 2005 at 01:33:22 PM EST
(howard@monkeyblah.com) http://elver.cellosoft.com

You, Sir, are the true new Douglas R. Hofstadter! I haven't read anything this fascinating on the subject of AI since "Gödel, Escher, Bach: An Eternal Golden Braid".

If you can, pop a link to your writing over to Hofstadter and see what he thinks of it. Me, I'll be looking forward to Your book.



Dimensions (3.00 / 4) (#35)
by Morkney on Sun Jun 12th, 2005 at 12:23:56 PM EST

The hyper-surface area of an n-dimensional sphere is (n-1)-dimensional. So the maximal surface area is given by a 6-dimensional surface.

This can be confusing, since n-dimensional sphere sometimes means "(n-1)-dimensional sphere in n-dimensional space" and sometimes means "n-dimensional sphere in (n+1)-dimensional space." However, mathworld explicitly states that the former meaning is used in this case (the "Geometer's sense").

As a check, you can see that on the graph given by the mathworld page, n=2 has a value of ~6 - obviously corresponding to the one-dimensional surface of length 2*pi.

This guy is dead serious (2.50 / 4) (#29)
by StephenThompson on Sun Jun 12th, 2005 at 07:11:55 AM EST

First time I read this I thought it was pretty much gibberish. Then I read it again and went to mindpixel's blog. His blog is filled with these ideas (http://mindpixel.com/chris/), and he even wrote a book chapter about it (http://www.mindpixel.com/PDF/mindasspace.pdf).

There is no doubt that he is serious.

He has made huge leaps of intuition here, and they may be way off the mark.  However if you read his other stuff you will see that there is more foundation for his idea than appears here.

Fascinating.

I don't think it's a troll (3.00 / 11) (#24)
by OmniCognate on Sun Jun 12th, 2005 at 05:54:27 AM EST

It's a bit off the wall, certainly, but I don't think it's a troll.

There are a couple of bits that really don't make much sense, like

I could build a massive soft phonetic index and use it to train a neural network to extract the underlying phonetic space and make a true synthetic subcognitive substrate
but I don't think the author can quite be accused of randomly throwing words he doesn't understand together.

It is true that a 7-dimensional hypersphere has maximum hyper-surface area (the terminology is awkward here - it isn't an area, it's a kind of hyper-area of one less dimension than the sphere itself). It's explained on MathWorld. A unit circle has circumference 2*pi; a unit sphere has surface area 4*pi. Apparently, this number increases to a maximum at 7 dimensions and then recedes.

The solution to the equation does give a maximum at 7.25695, and there is such a thing as fractional dimensionality. Whether it can be validly applied here I don't know, but there certainly is such a thing, and it does appear in the mathematics of fractals.

Switching between a 7-dimensional sphere and 7 layers is not particularly odd. The 7 layers are the 7 layers of cortical tissue. The seven dimensions refer to an abstract entity the author thinks the brain is representing. Presumably he is claiming each layer represents a dimension.

There are indeed 6 layers of cortex. I don't know whether anybody considers the thalamus a 7th. In some areas of the brain, particularly the visual cortex, the division of responsibilities between the layers is reasonably well understood and it doesn't look like what the author is describing, but he isn't just making this up.

The article is very coherent to begin with and does describe reasonable techniques for implementing a toy AI.

At one point he mentions using a neural network to interpolate between values. This is a valid and realistic use for a neural network.

There are some excessively big logical leaps in the second half of the article, and I can't say I agree with what the author is suggesting, but I do think he has an idea which he believes has value. I also think he understands the concepts he is referring to a lot better than some people are giving him credit for.



What a fascinating article (2.80 / 10) (#19)
by givemegmail111 on Sun Jun 12th, 2005 at 04:31:16 AM EST

It almost makes sense for the first few paragraphs, then degrades into abject nonsense so gradually you don't even realize you're being trolled. Of course what's really fascinating is the number of people who voted this up. I can only suppose they only read the coherent beginning paragraphs and skimmed the rest. Either that or they're making some sort of ironic "vote for the obvious troll" statement that's completely gone over my head.

Milnor example (none / 1) (#15)
by SaintPort on Sun Jun 12th, 2005 at 03:09:47 AM EST
(webmaster%40saintport%2Ecom) http://www.SaintPort.com

and as an aside I must remind everyone that...
the word 'hrair' is used for any number greater than four

the Milnor example link...
http://www.gang.umass.edu/reu/2000/curve09.html

--
Search the Scriptures
Start with some cheap grace...Got Life?

I suggest that there is real meaning in that (none / 0) (#14)
by SaintPort on Sun Jun 12th, 2005 at 03:01:39 AM EST
(webmaster%40saintport%2Ecom) http://www.SaintPort.com

six (6) is the number of man    think 666

and seven (7) is the number of God.


--
Search the Scriptures
Start with some cheap grace...Got Life?

20Q (none / 0) (#12)
by adimovk5 on Sat Jun 11th, 2005 at 11:57:10 PM EST

Twenty Questions is an internet-based game that relies on the responses of users, although the choice of responses is limited. The game uses user responses to filter its answers and make guesses. Because it is based on users' responses and not on an exact database, it can change with time and always remain current.

OMG (2.25 / 4) (#10)
by Kasreyn on Sat Jun 11th, 2005 at 11:52:55 PM EST
(screw email, AIM me or post a reply) http://www.livejournal.com/users/kasreyn

from your website:

I am a Hacker. Not reformed.

TERRORIST!!!

The real question is, can Rick Santorum pass the Turing Test?


"You'll run off to Zambuti to live with her in a village of dirt huts, and you will become their great white psycho king." -NoMoreNicksLeft, to Baldrson
Interesting (3.00 / 3) (#3)
by vadim on Sat Jun 11th, 2005 at 11:11:28 PM EST

But I still believe this approach is fundamentally mistaken.

I'm very interested in the subject myself, and I'm getting more and more convinced that if we want to produce a good AI it's going to have to pretty much emulate a living organism. Just something that parses strings isn't going to be good enough.

The main problem, as I see it, is that we humans are very dependent on our environment. In order for a computer to know what a cat is, it would first have to be able to perceive the 3D environment where the cat moves, and actually see what one looks like. I'm fairly sure that the attempts at building big databases are fundamentally futile, especially since pretty much all definitions and hierarchies are arbitrary and overlapping.

The solutions I see to this: Build it as a robot in the real world, or build it as a bot in a 3D game.
--
<@chani> I *cannot* remember names. but I did memorize 214 digits of pi once.

Needs edits (none / 0) (#1)
by forgotten on Sat Jun 11th, 2005 at 11:07:17 PM EST
(allisforgotten at gmail)

Or at a minimum some clarification. I didn't see this in editing - maybe it just passed through really quickly - but here are some things that confused me:

'maximum hypersurface': I thought you were talking about spheres?
'fractal': why did you use this word? I don't think you are using it correctly.
'layers' and 'dimensions': again, you are using these terms interchangeably when they are not.
large brain size: once a brain reaches a certain critical size in animals, there are other, more important factors that determine intelligence.

--

