Of course you can. There is no need to invoke degrees. You could just say 'the amount of forking'. If you took a bright child of eight through the proof, saying only that the total amount of forking of all the angles of a triangle must be the same as the forking between the half of a straight line going left and the half going right - which is what we want to prove, regardless of any degrees - I think they might well suddenly say "yeah, cool, it must be." Another example is Penrose's demonstration that 3 times 4 must be 4 times 3. You imagine a rectangle made up of 3 by 4 tomatoes, and you see that whatever the number might be each way, the total has to satisfy PxQ = QxP. A child could understand that even if they cannot count to more than three.
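If it helps to write the tomato argument down in symbols (my own sketch of the counting, not Penrose's notation):

$$3 \times 4 \;=\; \underbrace{4+4+4}_{\text{3 rows of 4}} \;=\; \underbrace{3+3+3+3}_{\text{4 columns of 3}} \;=\; 4 \times 3$$

Nothing in the counting used the particular numbers 3 and 4, so for any rectangle of P rows and Q columns the same two ways of counting the same tomatoes give $P \times Q = Q \times P$.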
That is the 64,000 dollar question. Leibniz had the idea that all ideas are actually the way internal numbers seem to us. That is hard to grasp, but the great thing about Leibniz is that he worked by reductio ad absurdum and very often came up with the right answer to such deep questions. My thought is that all ideas are the way the geometry of dynamic relations in nerve cells seems to us. Evolution selected out a DNA sequence for us that encodes biochemical machinery that builds a brain that can set up all sorts of geometries of intracellular dynamic relations and use those to paint pictures of the world. Because the rules of the cell geometry allow the pictures to relate systematically in a way that reflects systematic relations between events in the outside world, these internal pictures work brilliantly for logical thinking. As Jerry Fodor put it, we all know what a carburettor is before we see one - we just need to learn that that internal picture pattern is called 'carburettor'. Plato says the same in the Meno, if I remember rightly. So a brain without any data through sensory organs could in theory come to realise that the sum of the angles of a triangle is always the same. It would no doubt be helped along by some internal pictures or 'impressions', but these impressions do not in themselves provide the knowledge. The generalisation requires something else.
I agree that if you were actually showing those things to a child, who doesn't have the logical impressions to deduce it from basic axioms, you would produce pictures and diagrams. But in doing so, are we not using basic impressions about sight data to show that two things are the same? I admit that it is difficult to know if the knowledge about PxQ = QxP comes from 'reason' or if it comes from the impressions about the size of those squares. Maybe that isn't even a meaningful question, and one that is only asked because we don't understand how the mind works.
I think this is where we fundamentally disagree. I don't think we know what a carburettor is before we see one. I think we see it and then that new information is stored in the brain. Not that the information was there before and we just discover what to call it. I don't pretend to understand it but your theory on ideas being geometric relations between nerve cells sounds interesting. Although, why would DNA be selected to build a brain, if DNA was around long before brains? Surely DNA evolved only to transmit information because when it did that it was more likely to keep doing that.
We use impressions to get an idea of what the question is about, but the sense that the proof must be true is not based on any particular impression. It is a sense that, for all those triangles we have never seen, it must be so. And when going through the proof on an example, once you get the idea you don't bother to carry over any precise impressions. You don't actually do any precise comparing in your head. You realise that all you need is to argue in relation to 'whatever is the other side of the fork between those two lines'.
Fodor's example is extreme and on one level implausible. I am not sure I quite believe he meant that. But he was voicing the idea that our brains come with pattern options for every impression already in hand, with meanings already determined. Most theories of mind make no attempt to explain how this could be, and no theory of how a computer works can explain how it could come with internal meanings ready to hand. But Leibniz could see that this sort of thing must be the case. So a brain comes like a calendar dial where you can change the numbers and letters for days and months by turning wheels. An option for every day is already there but, much more importantly, the brain already knows that Dec 25 is Christmas and Jan 1 is New Year's Day.

We know that lots of animals recognise predators without ever having seen one before - snakes, hawks, lions. Almost certainly we do too. A female cuckoo returning to Europe in her first spring already knows that the sound 'cuck-coo' means that a mate is around. Moreover, she has no trouble knowing what it means to be a mate. She knows exactly what to do. So no way does all her knowledge come from impressions. Perhaps the most intriguing example for us is the size of the moon. Everyone thinks the moon looks bigger on the horizon. The reasons are complicated, but to have a seeming size at all the moon needs to seem to be a certain distance away, and the same for the sun. The best theory I have seen is that we are born knowing not only what 'up' means and 'down' means (more innate knowledge) but that there is a 'sky' level where moons and suns and stars live, which is Millennium Dome shaped. That is to say, it is a rather flattened half sphere.

DNA evolved and survived because it is very good at encoding blueprints for making systems that create more copies of DNA. Those systems have to survive in competition, so the ones that survive best will be those that avoid being eaten by other systems that build more DNA. So DNA evolved to encode the building of intelligent, responsive animal systems. Brains are the bits that generate the intelligent responses. DNA also evolved to copy itself very, very reliably, but not completely reliably, so that every now and again it could by chance encode a system that competes even better.
The bit that is really weird, and that almost nobody in contemporary philosophy tries to address in the way they did 300 years ago, is that not only do brains have ready-made options for every impression pattern they could meet but they know what they mean. Meaning is built into the basic physics of neural tissue dynamics. Descartes understood that and said that we just had to accept this extraordinary truth. When signals arrive somewhere in a brain to create an impression, the receiving unit knows what they mean. Nobody has any idea how, so nobody has had any idea how to build this into a computer, so we build computers that are intelligent but know nothing about the meaning of what they compute.
When you say 'know what they mean', what do you mean? In your carburettor example, you don't know it's a carburettor, you know that it's an object. You could use the same set of processing rules to know what any 'thing' means, couldn't you? You don't need to have something ready made for every unique pattern. Maybe something like this: you've converted sensory information from photons, presumably first into hues and shades, then fabricated a 3D coordinate system, and then partitioned out of all that an individual entity that is the carburettor. Any other layers of fabrication on top of that - its weight, temperature, how it might smell or taste, that it even has a name, or even what it might do in the context of an engine - are probably a mixture of this prebaked processing of reality and learned experience over a lifetime.
But where did knowing what it is 'to be an object' come from? We assume so much of this is self-explanatory, but it isn't at all. The computers we build have no way of having an idea of 'an object'. As in John Searle's Chinese Room scenario, they simply move signals around according to rules. The carburettor example is too complicated - it was designed to be a rather flippant extreme case by Fodor, I think. Better perhaps to say that we know what a tube is before we get told it is called a tube. Our brains are wired so that they know its dynamic properties as a tube - that you could put things through it. The sign that this cannot be assumed to derive just from experience is that we find the brain gets things wrong. It has concepts like 'objects', and physics has made it clear that these are not a fundamental category of knowable goings-on in the world. The knowable world turns out to be made of actions, not objects. And time isn't what people think it is, at all. We are born 'knowing' about time but being wrong. Later in life some people get to know more about what time really means, but it is still not entirely clear. And all these things like weight and smell are just symbols the brain uses to paint the world the way it was built to paint it - very usefully, but rather wrong in various places. I think we have to posit that the brain uses systematic relations between patterns of intracellular dynamics to model similar systematic relations in dynamics outside, and does it very well, but in a way that uses short cuts that sometimes lead to things breaking down.
It might be that you mean this in a different sense, but the 'knowing' and the 'conceptualisation of an object from processed information' are two completely separate things, aren't they? In the latter case a computer can conceptualise in this way, can it not? If you train a neural network on 10,000 photographs of numbers, the network will be able to take the raw pixel values from any photograph and abstract them into its best guess of what the number is. It will glitch out and get things wrong in just the way you describe a brain might do, looking for patterns where there aren't any and so on (for instance if you gave it a photograph of pure white noise). In your biological picture, would this more likely be achieved intercellularly, between neurons, rather than intracellularly? The former case is the big question of why there is conscious awareness of anything at all, which is the real mystery, isn't it? Whether it's the higher-level conceptualisation of what a tube is, or the relatively raw sensory level of physical sensation, the fact that there is perception there is perplexing.
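Just to make that concrete, here is a minimal sketch in Python of the kind of classifier I mean. The library choice and details are my own assumptions - it uses the small 8x8 digits set that ships with scikit-learn rather than 10,000 photographs - but the behaviour is the same kind of thing: it learns to map raw pixel values to a best-guess digit, and when handed pure white noise it still names a digit, often confidently.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1797 greyscale images of handwritten digits, 8x8 pixels each
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# a small multilayer perceptron: raw pixel values in, digit guess out
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("accuracy on unseen digits:", net.score(X_test, y_test))

# hand it pure white noise (pixel values 0-16): it still names a digit -
# the 'seeing patterns that are not there' glitch
noise = np.random.default_rng(0).uniform(0, 16, size=(1, 64))
print("guess for noise:", net.predict(noise)[0],
      "with confidence", net.predict_proba(noise).max())

None of which settles the question, of course - it only illustrates the conceptualisation half, not the knowing half.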
Well, I could go on for hours about this, but there is actually no such thing as knowing, as Descartes noted, though there are all sorts of possible meanings of nearly knowing or conceptualising. Knowing is probably more about being aware of a truth. That is likely to require conceptualising. I don't think there is any sense in which a computer can conceptualise. 'A computer' is an arbitrary label we give to certain aggregates of events that tend to be closely associated in time and space, but just what the limits are is unclear. To conceptualise something is, to my mind, the event of having some idea or concept manifest in some event here and now. This goes on in brains as far as we can understand, but where would such an event be in this aggregate of events we call a computer? Computers produce useful answers to questions about concepts that we have, but I do not know of any sense in which there is an entity called a computer that need have these concepts. If, roughly as Ned Block proposed, a computer was instantiated by the members of the Chinese Nation sending each other emails of either 0 or 1, moving the salt pot to the left for 0 and to the right for 1, and sending on another message depending on certain simple rules each person has been given in advance, in what sense does anything or anybody have a concept of whatever is being computed - maybe the significance of an experimental result on the length of Wapiti tails? 'Systems' in the sense of event aggregates like computers have no intrinsic metaphysical legitimacy that could endow them with an ability to have concepts, as far as I can see.

The most basic doctrine of conventional neuroscience is the neuronal doctrine: every computation occurs separately in one neuron at a time. There is no intrinsically legitimate 'system' in a brain either that could 'have concepts' or 'know'. All events that could support complex patterns encoding concepts must occur within individual cells because if a pattern is encoded in lots of separate cells then no event has access to all parts of the pattern so cannot know it all or have a concept based on it all.

I know that this sounds heterodox, but I would offer the following.
1. Leibniz knew it must be the case in 1695.
2. William James (the 'father of psychology') stated in 1890 that it is the only analysis that is not contradictory.
3. At least three people in the last twenty-five years have come up with the analysis entirely independently and through the same logic, including myself. Others have toyed with the idea off and on for 200 years at least.
It is what I spend my time on when I am not here.
There is nothing perplexing about the existence of conscious awareness in a universe in which everything is calibrated in terms of its causal relation to this conscious awareness here and now. All physics is an exercise in finding rules of regularity in the way distant events influence the content of conscious experience here and now. Einstein actually says this in a lecture in 1922, although he may not have been thinking in quite such fundamental terms. The 'Hard Problem' of why there is consciousness in a 'physical world' is an absurdity, because the ultimate definition of physical has to be that which has the power to influence conscious experience. Without conscious experience there would be no meaning to 'physical'. Again, this is something Leibniz understood very clearly. Modern philosophers have lost the plot because they do not really understand what science is about. There are more interesting questions about what constitutes an individual 'event' that might be an experience. It has to be some intrinsically defined indivisible dynamic unit or unit of action, as Leibniz described - what he called a monadic unit. For our experiences it almost certainly needs to be some very complex electromagnetic field/action interaction. But it has to be in an individual cell, because that is the only place where information is integrated in a complex way in a brain.
I should perhaps add, to @chillier, @Eddie, @EndME or anyone else interested: if you can follow all my posts above and see why they should make any sense, you may be joining a select group of probably no more than 20 people worldwide who understand what Leibniz was about. They are a motley group of philosophers, neurologists, psychologists and others. All but one of us are getting a bit old, though, so that understanding may die again, as it did for about 150 years before Russell sort of re-discovered what Leibniz meant in 1900, although Russell himself failed to understand some of the key points.
I'll struggle to fully grasp what you're saying right now, I think, because I'm missing too much context. How do you define knowing? Does it imply a dualistic relationship between a knower and an experience? I take it to mean just the conscious experience itself: if there is experience then it has to be known, by definition. It seems plausible to me that the idea of a knower is an abstraction of other experiences or information, just in the same way that a tube object would be.

"All events that could support complex patterns encoding concepts must occur within individual cells because if a pattern is encoded in lots of separate cells then no event has access to all parts of the pattern so cannot know it all or have a concept based on it all."

This sounds like the key point that I just don't understand at all: why is this a problem? I don't really understand why a conscious experience couldn't be an emergent property of many neurons firing over a time period - how would we know?

I'm curious, have you ever gotten into meditation? It can be a little alienating talking about this, but a lot of these themes come up with serious meditation practice. Of particular relevance here, and from my experience: once your concentration gets good enough, if you focus on a sensation like a thought, emotion, vision or touch it will dissolve into vibrations. Each vibration arises and passes away at a variable frame rate (something like 10 Hz, but it changes), and they also seem to come one at a time (i.e. if you're focusing on both of your thumbs you're actually experiencing a rapid flickering between them).
At first pass, yes, the knower is just the event of experience, if we have no legitimate cause to propose some 'enduring' knower that hosts these events. However, right at the heart of Leibniz's conception of the indivisible subject-as-perceiver-as-action is a problem with temporal divisibility. I think there is a technical solution to this in condensed matter physics but, yes, there is a mass of context here to cover first. In simple terms, once you get to higher-order modes of action in condensed matter, based on asymmetries intrinsic to ordered structure, you have two levels of individual: one a single event, the other a mode of acting that continues as long as the structure remains the same.
The simplest explanation of why comes in William James's Principles of Psychology (1890), Chapter 6, in the section on the Mind-Dust Theory. He points out that an aggregate of events in several cells cannot be a single event, so cannot be a single integrated experience. In his terms it is 'not a physical fact'. Unpicking that systematically gets quite lengthy, but most people in the field seem to accept that James's account expresses a more or less transparent truth that most people see when they read him. He says that trying to combine events into a single event generates a 'Combination Problem', and philosophers of mind have generally agreed that it does. Basically, either we say that an individual integrated experience is a single direct causal relation within some indivisible event, or we claim a non-locality of a sort that even quantum theory does not countenance - that is, we allow a violation of locality. If we allow a violation of locality here, then every physics experiment involves a non-local step and we have no rationale for making one step non-local when all the others obey locality. You have to invoke the sort of dualism that even Descartes did not really intend, although he rather set himself up to look that way.

The philosopher James Blackmon has pointed out that if we wanted several neural events to combine into an experience or conceptualisation, they would have to be either parallel or in series. If parallel, we have no principled reason to add up events in just one brain: in an embrace, half of another brain may be as close to half of a first brain as its other half is. If instead we invoke the idea that neurons in a brain are part of a 'system' with emergent properties, we make that claim because events in neurons are related in sequence. And making two events in sequence into a single event is the one thing you cannot possibly do, because you crash the whole concept of causation. Something cannot be caused by half of itself causing the other half.
I have tested out various ways of playing with attention, including meditation but also just simple physiological tricks like fixating on an object with saccades, which one can do with a sort of 'effortless attending'. You can get all sorts of jolly effects, like tulips in vases disappearing off the table, and yes, vibrations and waves. But to me these are all just signs of how one can play around with the 'vertical hold' on the TV input that provides the display patterns. I don't think it tells us anything profound, other than confirming just how much our experiences are internal patterns in a language that must use basic physics but has very little directly to do with the physics of what is being represented. Much of the dynamics in neurons is based on cycles around 10-40 Hz. There are situations where you can play with that using visual or auditory inputs.
Just to add that the appeal to emergence is of course hugely popular with neuroscientists who cannot accept the implications of their own neuronal doctrine. But 'emergent property' is a sort of hand-waving invocation that needs some justification. As James points out, you cannot invoke things that physics simply does not allow - and not just the physics we have, in fact, but our whole conceptual framework for causality.
So this logic would go all the way down, then, to the smallest possible unit of time in order for an event to be truly indivisible, which has something to do with Planck, I'm guessing? Even enzyme catalysis consists of multiple events, as do the acts of successive ion channels opening. So it would have to be something which isn't an action potential, or the typical function of a protein - catalysis, polymerisation, etc. Something else which corresponds one to one with a conscious experience and lasts the shortest possible amount of time physics allows for? And this, I suppose, would be informed in some way by the normal firing roles of neurons in human physiology. If this is wrong it would be good to understand what is meant by an 'indivisible event'.

In practice, though, you just don't experience things that quickly. It seems you experience things through a rolling shutter like you say, and relatively slowly, but where each frame represents one thing - a bit of one internal/external sound/image/feeling completely on its own. It also seems to flicker back and forth between whatever sensation has the attention and a sensation often just behind the eyes, which together makes it feel like you're a person observing a sensation from your head. This is all subjective by definition, obviously, but it does seem to be the common report of things.
Fair enough! Though I would defensively say that I'm not arguing that it is that - I think it's unknowable. Rather, I'm asking why it definitely can't be that.