Wednesday, 15 June 2016

Ecological Representations

Funny story. One day, I got a text from Sabrina that said "Holy crap. I think ecological information is a representation." "Uh oh", I thought - "Twitter is gonna be maaaaad". Then we thought, "we should probably write this idea down, see if we can break it". So we wrote, and the idea just got stronger. The result is a paper we call "Ecological Representations", which we have just uploaded as a preprint to bioRxiv.

In it, we
  1. argue that Gibsonian ecological information meets the criteria to be a representation, then
  2. predict that this information leads to neural activity that preserves its structure and that also meets the criteria to be a representation, and
  3. argue that these two ecological representations (informational and neural) can address the three core reasons cognitive science wants representations (getting intentionality/aboutness from a physical system, solving poverty of stimulus, and enabling higher order cognition) while
  4. avoiding the two big problems with mental representations trying to address those motivations (symbol grounding and system-detectable error).
  5. We then spend a bunch of time getting serious about higher order cognition grounded in information (see, I told you we were working on it!).
We submitted this to a good cognitive science journal and got the reviews back last week. We were rejected after hitting a wall of confusion from two reviewers who got distracted by side issues and one who just didn't quite get it. No-one gave us much in the way of specific actionable things that need fixing, nor did anyone actually say our analysis of information as a representation was wrong. We remain unimpressed with the quality of the reviews (although the editor has been very clear and generous with his time in replying to us querying the rejection). 

That said, we are taking one hint from the process before submitting elsewhere, and that is we are clearly having trouble articulating the argument, in part, I think, because it comes out of left field and trips a lot of different knee-jerk reactions. We think the story makes sense, but then, we're us, so what we need is some fresh eyes. This is where you lovely people come in.

I have made some minor structural revisions to the version that got reviewed, addressing some of the issues that came up, and I have uploaded it as a pre-print to bioRxiv. Now, we want your help.
One quick thing to flag up; just this week, there's been a special issue on embodiment and symbols and meaning and things at Psychonomic Bulletin and Review. There's a bunch of this material that will likely be relevant to building the case that this paper is useful; we have not had a chance to get into it yet, but we will!
  1. The problems from the reviews boiled down to "we need more background on the ecological stuff" and "this is all a bit old-fashioned mental representation talk; we've moved on". Our initial reply is "sure, no problem" and "you may have moved on, but this really is the core of the matter and the problems remain unsolved, so there's that". Our questions: what remains unclear from the ecological psychology background in the paper, and what is this mythical recent literature on the nature of mental representation that solves or defines away these issues?
  2. Does our argument work? Are we just making things up or is information really doing the representational work we think it is? 
  3. Do the implications we lay out follow? 
  4. Just how much do you disagree with our (very early days) framework for using this analysis to get information supporting higher order cognition? More importantly, what do you think is wrong?
One thing to get out of the way here: we know that the word 'representation' has been checked out of the library by many different people. We've tried to aim for the most basic definition of representation that gets at what all these versions are trying to deal with, and landed on the idea that, if nothing else, a representation must stand in for something else in a way that gives you intentional, functional behaviour of all kinds. We still think mental representations don't exist, but the need for things standing in for other things to support behaviour seems real; this is just an ecologically grounded way of solving that problem.

So, please: spread the preprint around, read it, comment on it, dissect the hell out of it. Post comments here, blog it yourself, find us on Twitter, or email us your thoughts. We appreciate all the interactions we have via the blog and Twitter and we really want to know what we can do to make this paper good. It's a fairly important part of a bigger project, and it's important that we get this right. For all their flaws, the reviews told us we aren't telling this story clearly yet, and so it's on us to find out why not.

Golonka, S., & Wilson, A. D. (2016). Ecological Representations. Preprint uploaded to bioRxiv. doi:10.1101/058925


  1. I'm struggling to understand what you're saying here, and it isn't helped by the typos and elisions in your text. Proofreading can often be instructive, and it requires real concentration on how things are read.

  2. Sabrina and Andrew,
    I've managed to read the full thing, once (in my lunch breaks!).
    First quick comment: I think it's a very important move in exactly the right direction, kudos! (I can't stress this too much: you are doing exactly what I would have liked to do myself, if life didn't lead me in other directions.)
    However, I also see one or more potential deal-breakers: I'm unsure whether I can condense my worries in one overarching point, I'll know if I can only after trying.
    Naturally, this comes from my own very peculiar POV: I have a (formal) background in biophysics, neurobiology and molecular biology; by contrast, my preparation in psychology is strictly DIY and not Gibsonian.
    Given the potential I see in this, I'd be very happy to produce detailed feedback (with no strings attached, naturally) - it's going to be long and dense!
    Which brings me to one question: how would you like to receive it?
    Options I see:
    1. Email. Simple and direct.
    2. In here. This looks cumbersome because of length-limits on comments.
    3. Via a post on my blog (no limits on comments).
    Since it's your work, it's entirely up to you to evaluate pros and cons.
    Do note that I'll be using my spare time, so I will be slow in all cases!

    1. Hi Sergio

      Any format that suits you is fine. In public on your blog might be the most use, we really do want a conversation about the ideas. But we'd love to hear your thoughts and we are happy for those to show up in the way that suits you best!

    2. I realised just after clicking "publish" that I was taking it for granted that you'd be interested, whoops! Glad to hear I wasn't mistaken.
      My blog it is (easier for me, and glad to keep it open). Will post a notice here when done (please don't hold your breath).

    3. Done.
      The result is blipping long, dense and somewhat repetitive, apologies for that.
      You can find it here.
      Hope you'll like it, but looking forward to all feedback, especially if you disagree.

  3. Beware of supposing that something that can be regarded as a representation is actually a representation. Representations are indeed stand-ins but the vital thing to bear in mind is that only creatures capable of accepting stand-ins are capable of producing and using them. The environment does not produce representations or information. It can be regarded as doing so, but we must be very careful to avoid the assumption that the content we can ascribe to things is a de facto attribute/property of them.

    1. The thing about Gibsonian information is that it is a stand-in for the environment that was produced by the environment itself. Ecologically, the environment produces both information (that's Gibson) AND representations (that's us in this paper).

      Is there a literature reserving the word 'representation' in cognitive science to things created and used by organisms?

      We do get the concern about mistaking a description for the thing described - avoiding exactly this is baked into the ecological approach, and it's something we work hard to do. If you think we've done it somewhere, though, please let us know!

    2. Thanks Andrew. I know that the term "information" is intended to be free of representational commitments, but ask yourself this. Can the required usage in any particular circumstance be replaced by "causal efficacy" or does the influence within the system you are seeking to explain necessarily depend upon some form of signal production and/or reception? If the former is the case, then no representation need be assumed. But if the latter is the case, then representation (and semantic information no less) is inevitably implied.

      Sadly there is not a literature reserving the word 'representation' in cognitive science to things created and used by organisms, no. This does not mean that the usage is coherent though. Representations are always intentionally produced. There is no such thing as an unintentional representation. This is why only intelligent creatures use representations. As Ramsey (2015) puts it: "Representational Theory of Mind is slowly becoming the 'causally relevant to the processing theory of mind'—an utterly vacuous outlook."

      Luciano Floridi also puts it well:
      "In philosophy, this means that virtually any issue can be rephrased in informational terms. This semantic power is a great advantage of PI, understood as a methodology [...] This shows that we are dealing with an influential paradigm, describable in terms of an informational philosophy. But it may also be a problem, because a metaphorically pan-informational approach can lead to a dangerous equivocation, namely, thinking that since any X can be described in (more or less metaphorically) informational terms, then the nature of any X is genuinely informational. [...] A key that opens every lock only shows that there is something wrong with the locks." (2013 p.16)

    3. Representations are always intentionally produced. There is no such thing as an unintentional representation.
      One quick thing; this isn't quite right. For example, the whole point of the symbol grounding problem is that you can build non-intentional representations and that that is a problem! Am I missing something here?

      Also, note: when we say information we mean Gibsonian information, specifically the kinematic patterns present in things like the optic array. We aren't talking (yet) about Shannon information, etc.

    4. \\the whole point of the symbol grounding problem is that you can build non-intentional representations and that that is a problem! Am I missing something here?//

      I think you are indeed missing something, yes. What is the action of building after all if not an intentional activity? As I see it, the symbol grounding problem relates to how symbols get their meaning, not to the claim that we can build non-intentional representations. Could you give one or two examples of these non-intentional representations?

      Apologies if I seem hard headed about the necessary intentionality of representation. It’s important though, as I’m sure you recognise. Let me put my case as simply as I can. Either inner representations are intentional or they are not. If they are intentional, then the intention to produce them must also be intentional thus requiring an infinite regress of intention motivating representations. If they are not intentional, then we have no means of explaining intention in the first place. As far as I am aware there is only one robust solution to this dilemma, which involves—amongst other things—the rejection of inner representation altogether. All representations require an audience, at least in principle.

      In your paper you write: “These patterns are created by the lawful interaction of the energy with the dynamics of the world and are used by organisms to perceive that world.” By “patterns created” you clearly mean regularly occurring (and therefore adaptively exploitable) causal influences. If you mean anything more significant (like units etc.) then we are straight into the symbol grounding problem. When you say that organisms “use” these patterns to perceive the world you can only mean that they are influenced by them. You surely do not mean that all organisms are tool-users who purposefully exploit environmental resources. So it is not true that organisms use patterns to perceive the world. What organisms do—simple organisms that is—is respond efficaciously to regularly occurring causal influences that we symbol-users find it useful to call “information”.

      There is a lot more to say but I will leave it there for the moment because I don’t want to outstay my welcome.

    5. You are more than welcome, this is the kind of feedback we are looking for :)

      The problem you describe with inner representations is bang on, we agree and it's in the paper when we reject mental representations. The ecological neural representations we discuss are 'inner', but inherit their intentionality from informational representations, dodging this bullet.

      What organisms do—simple organisms that is—is respond efficaciously to regularly occurring causal influences that we symbol-users find it useful to call “information”.
      This sounds ok, except a) we'd argue all organisms, and b) we try to do a lot of work to justify why responding to information is efficacious. In other words, there are reasons why responding to information leads to functional behaviour, and those reasons have to do with the lawful process that creates kinematic patterns specifying behaviourally relevant dynamical properties. So the work that goes into justifying how 'patterns' come to be 'information' is a big part of this, and one very sensible way to verbally describe all this is that organisms perceive their environments via the detection of specifying information.

      Couple of small things:
      If you mean anything more significant (like units etc.) then we are straight into the symbol grounding problem.
      I don't understand this; what do you mean by 'units' here?

      re non-intentional representations, the only thing I was thinking is that the symbol grounding problem implies that mental representations, as conceived by cognitive psychology, might be such things. This may or may not be an important point :)

    6. I have started to write a more detailed response that I will post on my blog. I'll drop a link here when I do.

    7. Yes please! Thanks for putting in the time, we appreciate it :)

    8. Well here it is. I haven't pulled any punches I'm afraid.

    9. Thanks! We'll get into it next week when Sabrina is back from the mechanism conference. Looking forward to reading it! Don't worry, we have thick skins :)

  4. Thanks! I'm really looking forward to getting into this. I've just read the beginning and I'm very interested - I've had a pet idea about reconciling Gibsonian and Shannon information, but I've not worked out the details. I'm sure we'll have lots to talk about over on your blog.

    1. My pleasure entirely.
      I don't have a professional stake in the business, but personal/internal motivation is even more powerful: I really wish you to publish this paper, and publish it well!
      Re Shannon's info, you may want to start with this: what the hell is information anyway?
      It's where I mix my scicomm intentions with my own, ahem, "interpretations". I mention it here because I guess it may make my commentary easier to follow (I think it's linked in there as well).

  5. It seems a tad arbitrary and unfortunate that you would use Newell's definition of what a representation consists in, especially when for all appearances he looks like a dyed-in-the-wool representationalist. I thought you'd be on Dreyfus' side of the fence regarding what Newell was proposing in his “Physical Symbol Systems” paper?

    1. What we propose is largely consistent with Dreyfus, at least in terms of critiquing which behaviours are thought to require representations (e.g. we don't think internal reps are required for skilled movement, and we don't argue that functional behaviour is caused by stored internal representations of past experience). We also agree with content critiques regarding mental reps.
      We chose this definition because it's minimal (basically all mainstream accounts of mental representation involve designation), it overlaps with how the word representation is used more widely in science, and it also formed the basis of Bechtel's critique of non-representational accounts of cognition (e.g. van Gelder).
      Whether that was a good choice or not, the main question is whether our contention is correct that ecological information qualifies as a representation according to this definition.
      There is a larger debate to be had about the necessity (or not) of representation for given behaviours. The majority of the community accepts them in some form, and we think we have shown a way to make representational claims objective and immune to two major problems (system-detectable error and symbol grounding). Actually carrying through our analysis and applying it to given tasks will yield something very, very different to mental representational accounts, in a way that preserves (we think) the lessons from various arguments against representation. But that's another paper!