Booksellers versus Bestsellers

  • How to start a dark age and what myths should do for you

    by John MacBeath Watkins

    The term "dark ages" is not much used anymore, but it still conjures up notions of an age of ignorance following the fall of a great civilization.

    It was first applied to the entire Middle Ages in about 1330 by Petrarch. Light and darkness had symbolized good and evil, but Petrarch made them symbols of knowledge and ignorance. He saw his own time as one of darkness, and aspired to a time of greater light.

    That time of light arrived some time later as the Renaissance, the dawning of a time when people admired knowledge and it became more widespread. Then came a time when archaeology started digging up the "dark ages" and found that a great deal had been known and accomplished in the Middle Ages, so now we seldom use the term for anything but the early Middle Ages.

    It's easy to put a starting date to the dark ages. Emperor Justinian closed the pagan and Jewish schools in 529 AD, and the dark ages began.

    The decree, as translated by James Hannam, reads as follows:
    We wish to widen the law once made by us and by our father of blessed memory against all remaining heresies (we call heresies those faiths which hold and believe things otherwise than the catholic and apostolic orthodox church), so that it ought to apply not only to them but also to Samaritans [Jews] and pagans. Thus, since they have had such an ill effect, they should have no influence nor enjoy any dignity, nor acting as teachers of any subjects, should they drag the minds of the simple to their errors and, in this way, turn the more ignorant of them against the pure and true orthodox faith; so we permit only those who are of the orthodox faith to teach and accept a public stipend.

    Justinian seems mainly to have aimed this at the Athenian Academy, which traced its (sometimes interrupted) existence back to its founding by Plato in the early 4th century BCE, but he also closed Jewish schools and schools run by those judged to be heretics.

    In so doing, he centralized power over what was deemed to be true. The decree made it illegal to teach things that were contrary to the teachings of the "catholic and apostolic orthodox church."

    There were Greek philosophers who had figured out not only that the earth was round, but had calculated pretty accurately its circumference. They knew that the rotation of the earth explained the sequence of day and night. Justinian didn't make it a crime for the great pagan scholars of his age to write and publish -- that came later -- but he shut down the Academy, leaving the scholars to make their own way.

    Hannam is skeptical about the impact of this action. Many pagan documents survived, and were even taught in Christian academies.

    But the schools of the Eastern Roman Empire had survived the fall of the Western Roman Empire in 476. Justinian was the last of the Latin-speaking emperors of the Eastern Roman Empire. He sought to reconquer the territory that had been the Western Roman Empire, but failed. As the empire's grip over Europe failed, the political institutions that had united it failed, and the only pan-European institution remaining was the Church. It became the dominant force in the preservation of knowledge and the maintenance of teaching institutions and traditions. And it demanded allegiance to what the Church believed.

    Some scholars and some texts made their way to Persia, and with the rise of the Muslim religion early in the 7th century AD, schools that remained in Alexandria and Cairo fell into Muslim hands. Thus began the golden age of Muslim science and philosophy.

    The golden age of Muslim science and philosophy spanned from 750 AD to about 1100 AD. What happened then?

    The Incoherence of the Philosophers, that's what. The second-most influential Muslim cleric (after Muhammad) was a scholar named Abu Hamid al-Ghazali, who wrote a book of that title, published in the late 11th century. He argued that those Muslim scholars who had based their works on Plato and Aristotle were wrong -- essentially, heretical. The spread of his thought led to religious institutions that taught that human reason by itself cannot establish truth. Although al-Ghazali himself had nothing against science, this in effect meant that if you really wanted to establish truth, you didn't go to a scientist or a philosopher who had devoted his life and efforts to learning about the thing in question. Instead, the final arbiter of truth would be a cleric who specialized in the Koran.

    This led to a decay of Muslim science and philosophy. Some would say it led to a dark age for their civilization.

    This seems to be the way to cause a dark age: You simply give religion authority over establishing what is true of the physical world.

    Religion is in the business of delivering eternal verities, not of discovering new things. In fact, in such celebrated cases of the discovery of new things as Galileo's astronomy or Darwin's Origin of Species, religion has fought against new knowledge of how the universe works.

    Joseph Campbell, in Myths to Live By, wrote that religion or myth (the difference seems to be that myths are religious beliefs no longer in use) serves four functions:

    One, "to waken and maintain in the individual a sense of awe and gratitude in relation to the mystery dimension of the universe..."

    Two, "to offer an image of the universe that will be in accord with the knowledge of the time..."

    Three, "to validate, support, and imprint the norms of a given, specific moral order, that, namely, of the society in which the individual is to live."

    Four, "to guide him, stage by stage, in health, strength, and harmony of spirit, through the whole foreseeable course of a useful life."

    Can a religion that fails in the second function succeed in the other three? I doubt very much it can, because a failure in one area undermines faith in the truth of sacred knowledge in all the others. How could a church that taught the earth was flat have any authority after we had photographed the earth from the moon?

    But the Catholic Church did not remove Galileo's books teaching heliocentrism from its Index of Forbidden Books until 1758, and in 1992 the Pope announced that the church accepted that the earth moves around the sun. I can find no indication, however, of the verdict of the Inquisition against Galileo being rescinded. The committee Pope John Paul II appointed in 1979 had, by 1992, concluded that the Inquisition had acted properly by the standards of its day, although Galileo was right about the sun and earth.

    So, that's all right. Retard intellectual progress by a century or so, and it's all in good fun. In 2008, Pope Benedict XVI cancelled an appearance at La Sapienza University because some students and professors sent him a letter protesting his expressed views on Galileo. He was probably thinking, "why you talkin' 'bout old stuff?"

    It was the notion that there had been a dark age that led people to call the blossoming of knowledge and science the Enlightenment.

    The Counter-Enlightenment, which started not long after the Church took Galileo's books off the Index of Forbidden Books, has argued that the Enlightenment undermines religion and the political and social order. This is, in fact, the basic stance of conservatism since at least Edmund Burke. The term "Counter-Enlightenment," as I'm using it here, does not refer to a single coherent movement with identifiable leaders, but rather to a wide span of groups and individuals who have argued against the goal of constant progress to new knowledge and a more rational society espoused by the great Enlightenment thinkers.

    They are probably right in arguing that the Enlightenment has undermined religion and the existing social order. After all, the Inquisition is a shadow of its former self, the church has had to repeatedly retreat on who is listed on the Index of Forbidden Books, and the most recent Pope has finally said that the beliefs of the Church do not conflict with the big bang theory about the origins of the universe or Darwin's ideas about the origin of species. It would be better if the church had not involved itself in such matters in the first place, but if it must make pronouncements about the nature of the physical world, it will have to change its tune when our knowledge changes or be undermined by new knowledge.

    We are still fighting this battle. Zealots want their religion's version of the origin of species taught in public schools (they originated as God made them), and moral notions, such as whether it is better to condemn homosexuals or accept them, are being fought out as the culture changes. A church that has failed to distinguish between its core beliefs and issues that seem less religious than social must change, or fail the test of providing a world view in harmony with the knowledge of the society to which it offers spiritual guidance.

    The Catholic Church is a handy way to talk about this, precisely because it is so well organized. But it is accompanied in its problems with the Enlightenment by people of many faiths. The easy way to deal with such problems used to be the one used on Galileo: tell the inconvenient person to shut up or die. But at this point in history, the world is changing too fast and the knowledge base outside the church is too big to be controlled.


  • Market power, monopsony and the porn industry

    by John MacBeath Watkins

    In a previous post, we discussed how changes in the music industry explain a bit of the Solow paradox, the fact that new technology is being adopted but productivity hasn't seen much increase. Now we have another example of a way in which technology is suppressing, rather than increasing, productivity growth.

    It also shows how power based on monopsony, the dominance of a buyer in the marketplace, can transfer wealth from one group to another in ways a free market wouldn't allow.

    The porn industry, once an economically vibrant part of the economy, has been devastated by changes in the business even as it adopts new technology. Porn stars once had a decent income from their performances, but now many have to work as prostitutes on the side to support themselves. It's a bit like the musicians who used to make most of their money from recordings, and now find they must get their living from live performances.

    Like the musicians, part of their problem is piracy. Computer technology allows the rapid and almost perfect copying of music and videos. As a result, many viewings of porn have been taken entirely out of  the economic sphere.

    But in the case of porn, there's another problem, the market power of the main distributor. The industry is dominated by Mindgeek, formerly Manwin. The company describes itself as being founded in 2013, but that's just when it changed its name back to Mindgeek after a period of being known as Manwin. Each name change came after its owners ran into legal trouble, resulting in the sale of the business.

    Mindgeek has something like monopsony power over the porn studios. They own an array of "tubes," the YouTube-like online distribution channels for porn. They also own a lot of porn producers, and are essential for the distribution of the works of other porn producers. According to a recent Slate article, Mindgeek doesn't always pay the porn producers when they put up a video on one of their sites:
    Even content producers that MindGeek owns have trouble getting their movies off MindGeek's tube sites. The result has been a vampiric ecosystem: MindGeek's producers make porn films mostly for the sake of being uploaded on to MindGeek's free tube sites, with lower returns for the producers but higher returns for MindGeek, which makes money off of the tube ads that does not go to anyone involved in the production side. The result is that performers have to have sex more times to support themselves, performing for the videos and doing their "live" performances as prostitutes.

    But isn't more work for less money lower productivity as we account for such things?

    There was a time when one company in an industry owning most of the production and distribution would have set off alarms in the Justice Department and resulted in anti-trust action. That changed in 1980 with the election of Ronald Reagan. Word soon went out that the Justice Department would not be worrying about practices such as predatory pricing and, in fact, was really only worried about monopoly power if it resulted in higher prices to consumers, essentially meaning that the Justice Department was now mainly interested in price fixing in its anti-trust enforcement. This reflected a legal theory advanced by Robert Bork in a book titled The Antitrust Paradox.

    This radically changed the incentives for American businesses. Predatory pricing, a practice that got Safeway in trouble with the Justice Department in the 1960s, became a notorious tactic of WalMart. The key was not to use this power to raise prices, but to dominate its markets and use its market power to squeeze producers.

    Mindgeek is using a similar tactic. It is distributing the product for free on ad-supported sites, while squeezing porn production companies and performers to lower its costs. It routinely violates the intellectual property rights to sexual performances, but is so essential to production companies and porn performers for distribution that many say they can't speak out about the problem.

    So, why don't the production companies get together and refuse to sell to Mindgeek unless they get paid? Well, if they demand a given price for their goods, that would be price fixing, one of the few aspects of the anti-trust act that the government is still enforcing.

    Production of porn films is down 75 percent from the year before Mindgeek was founded. DVD sales of porn are down 50 percent over the same time span, because who wants to pay for porn they can watch for free if they tolerate some ads?

    Netflix and Amazon are starting to produce their own content. We can expect more ethical behavior from them than we see from Mindgeek, but the incentives will be the same. We need to re-examine how our legislation regarding market power affects people selling their wares to distributors or working for them.

    The paradox referred to in Bork's book was that antitrust action to increase competition could increase, rather than decrease, prices. What he either failed to realize or didn't care about was that monopsony power, the market power of a dominant buyer, interferes with the business arrangements of people who contract to sell their wares or labor to that buyer. This represents a transfer of wealth from one group to another based on power rather than the workings of a free market, just as much as price fixing does.

  • Are we prisoners of language or the authors of our lives?

    by John MacBeath Watkins

    The Sapir-Whorf hypothesis tells us that language, because it gives us the categories we use to think, affects how we perceive the world. Some researchers have gone so far as to propose that people who have different color lexicons actually see colors differently.

    Color me skeptical. I think it highly likely that the Sapir-Whorf hypothesis is correct on more culturally conditioned matters like our sense of fairness, but find it unlikely that it has much, if any, effect on how we see color, as opposed to how we talk about what we perceive.

    But this basic insight, which has really been with us since Ferdinand de Saussure's book, A Course in General Linguistics, was published in 1916, gets at a deeper question. Are we prisoners of the languages that give our minds the categories we think with? Do we have individual agency, or are we prisoners of the structure of meaning?

    Is language a prison that restricts us, or a prism through which we see new things?

    Marxist political theory has insisted that the structure of meaning is a prison, that those who initiate us into it are enforcing capitalist cultural norms. Structuralist thinkers like Roland Barthes argued against what Barthes called the cult of the author, and in general, structuralists argued against the relevance of human agency and the autonomous individual.

    Structuralism has lost ground in its original field of linguistics. Noam Chomsky, for example, proposed that while structuralism was all right for describing phonology and morphology, it was inadequate for syntax. It could not explain the generation of the infinite variety of possible sentences or deal with the ambiguity of language.

    When Saussure developed structuralism, the previous movement in linguistics had been philology, which studied texts through their history, and the meanings of words as they have changed. This is a necessary process when examining classical texts, and philology has sort of calved off from the glacier of linguistics.

    Saussure proposed studying language synchronically, that is, as it exists at one time, which was perhaps a good corrective to the habits of his profession. But it did mean that the method was never intended to examine where the structure came from or how it changed. I doubt Saussure anticipated his method completely displacing the earlier methods of studying language. He simply felt it would be helpful to look at language as it exists, as well.

    As the understanding of the power of language spread, however, it did tend to obscure the role of the individual. Its proposal to study language as it is, rather than try to attach it to its past, fit with the modernist movement's desire to shed tradition and make the world new and rational, sweeping away the dust and sentiment of the centuries and plunging into the future. At the same time, the concept of the structure of language and thought was frightening. How could we leave the past behind when all we could think was already in the structure?

    Some tried to escape the structure of meaning, by making art that represented nothing, writing that tried to trick the brain into a space not already subsumed into the structure. But in the end, you cannot escape from meaning except into meaninglessness, and why do any work that is meaningless?

    We are not words in a dictionary that can never be revised. We define ourselves, in fact, we are the source of meaning. The web of meaning we call language would disappear if there were no minds to know it, no people to speak and hear. We learn by play, and it is through creative play that we expand the realm of meaning. A web without connections is just a tangle of fibers. We are the connections, and our relationships to each other are the fibers.

    Barthes was wrong. Authors are important, and authorship is pervasive. We are all the authors of our acts, writing the stories of our lives. Learning language and the other structures of society enables us to do this, to create new meanings, affirm or modify traditional meanings, and to influence others.

    We need not choose between being ourselves and being part of humanity, because we cannot help being both. Yes, we are in large part made up of those we've known, the books we've read, the traditions we've learned, but we are the vessels in which those things are stored and remade and passed on with our own essence included.





  • The Solow paradox, public goods, and the replicator economy.

    by John MacBeath Watkins

    Robert Solow, a Nobel-prize-winning economist, remarked way back in 1987 that "what everyone feels to have been a technological revolution...has been accompanied everywhere...by a slowdown in productivity growth."

    This has become known as the Solow paradox.

    The golden age of productivity growth in the U.S. was between 1939 and 2000, with a slowdown in the 1980s, an increase in the Clinton Administration, and a slowdown again since.

    What happened in 1939? Well, we began preparing for war. We didn't just build tanks, guns, ships, and aircraft, we also built roads and airports, and we dredged harbors and improved port facilities. Prior to World War II, flying boats were popular for serving areas that didn't have airports. After the war, there were plenty of airports.

    The infrastructure binge continued after the war, and Dwight Eisenhower thought his greatest accomplishment was the Interstate Highway Act, which knit the country together with ribbons of road. Eisenhower understood logistics. He also understood that training was important if you wished to mobilize a large enterprise, and he elevated education to a cabinet-level office.

    The federal investment in roads and education set loose the potential of the people and the land. And what have we done with this legacy of supply-side investment in public goods?

    We've disinvested. Our public goods are getting old, and we've pushed onto students the cost of financing their education, so that someone can easily come out of college $100,000 in debt. Higher education keeps getting cut while more is spent on other things, like prisons and welfare. Yet providing better education is one way we should be able to spend less on prisons and welfare.

    Our bridges are getting old, some of our roads are getting rough.

    But why didn't our technology give us the added productivity our disinvestment in public goods was taking away?

    Maybe it did. Or maybe, sometimes technology is not necessarily useful for increasing measured productivity.

    You measure productivity by seeing how many widgets are produced over a period of time by a given number of people. For example, in the cottage industry of music that existed before recorded music came along, you had to either make your own or hire a musician to make the music for you. Every song required a person to be there making the music.

    When recorded music came along, you no longer had to have a musician present to have a song. This meant fewer people would be employed as musicians, but also that people at the top of the profession could provide music for a larger number of people. A musician could sing a song once, and millions of people could buy that song and play it repeatedly. There was more music in our lives, it was made by the best musicians, and the cost was lower. Productivity increased.

    But we don't know how much, because we weren't calculating the productivity of musicians. A few musicians at the top were more productive, but once a record had been sold, it could be played many times. Those repeat performances were taken out of the economic sphere, and not counted as performances in any accounting sense. The metric became the sale of the record, rather than the performance of the song.
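
    To make that arithmetic concrete, here is a minimal sketch in Python, with purely made-up numbers, of how the standard measure (output per worker) changes when the unit counted shifts from the live performance to the record sold. The figures and the productivity() helper are illustrative assumptions, not data from this post.

        # Illustrative sketch only: hypothetical numbers, not real industry data.
        def productivity(output_units, workers):
            """Measured productivity: units of counted output per worker."""
            return output_units / workers

        # Cottage-industry era: the counted unit is the live performance.
        performances = 1_000_000      # hypothetical songs performed in a year
        musicians_then = 50_000
        print(productivity(performances, musicians_then))    # 20.0 units per musician

        # Recording era: the counted unit becomes the record sold.
        records_sold = 5_000_000      # hypothetical
        musicians_now = 10_000        # fewer working musicians
        print(productivity(records_sold, musicians_now))      # 500.0 units per musician

        # Repeat plays of a record are not counted at all; they have left the
        # economic sphere, so the measure understates the music actually consumed.

    The point of the sketch is only that the measured number depends entirely on which transactions get counted; the uncounted repeat performances vanish from the statistics.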

    But what happened with the digital revolution in music? Well, this:

    http://www.theatlantic.com/business/archive/2013/02/think-artists-dont-make-anything-off-music-sales-these-graphs-prove-you-wrong/273571/

    Unless there was a dramatic decrease in the number of musicians, this represents a huge decrease in productivity. Far fewer songs are being sold, and if the number of musicians remains constant, their productivity, measured by the usual economic methods, has decreased dramatically.

    But we know that this has not been accompanied by an increase in the cost of a song. What has happened instead is that much of the music produced has been taken out of the economic sphere altogether. People are pirating the songs, and getting music for free. There is a cost to this; it's not really as easy to steal a song as to buy it, but those who wish to sell a song are competing with the free copy that can be pirated by acquiring some skill and jettisoning some scruples.

    In the realm of classified ads, most are now free on Craigslist. Until recently, most newspapers have made their digital product free. As a result, whole swaths of the economy have dropped out of the economic sphere. When you produce something for a lower price, you increase productivity. When you produce it for free, in economic terms you aren't producing anything.

    Thus, we have a different paradox, that of the replicator economy. On Star Trek, replicators can make anything you want for free. But if everything you need is free, how does anyone get paid? Musicians are already facing the replicator economy. Writers may face it soon.

    This shows that not all technology produces increases in economic productivity, because some of it takes things out of the economic sphere. So, what does increase productivity?

    Full employment. I know, I know, productivity actually climbs in a recession because you lay off your least productive workers, but in the long run, only a shortage of workers convinces companies to make capital investments to reduce the number of workers needed. If you have to bid up the price of workers to attract employees, it makes sense to increase productivity.

    Right now, we have the spectacle of cash-rich companies buying back their own stock, which is great for managers who have stock options, but not great for productivity.

    Disinvestment in infrastructure has been bad for productivity, and we could kill two birds with one stone by catching up on that, which would increase employment, and build improvements that would unleash some productivity. Investment in public capital goods could increase employment enough to stimulate investment in private capital goods.

    But what are the chances of that? We have an entire political party dedicated to the proposition that government spending can't produce jobs. Until we get better lawmakers, we won't have better policy.



  • Undead persons, born at the crossroads of law and money

    by John MacBeath Watkins

    We argue about what a person is, in terms of the biology of the individual, but what if we were to apply the same standards to those undead things we call persons, the corporations?

    The Citizens United decision determined that corporations are people for the purpose of free speech, in particular in spending money to influence political races. The Hobby Lobby decision granted corporations an exemption from a law because the corporation was considered to have religious views. And legislators in several states want to give a zygote the legal status of a person at the moment the sperm enters the egg.

    I think these legal maneuvers reflect confusion about what a person is. A corporation has long been a person in terms of being able to sign contracts, but corporations are composite beings, made up of many biological persons. It is difficult to imagine them as persons in the sense of having faith, when they are likely made up of people of differing faiths, or of being politically engaged as citizens when they are made up of citizens with differing views. It is difficult to imagine a zygote having faith or political views as well.

    This used to be a matter of religion, when philosophers argued about at what point a baby is ensouled. Aristotle argued that the baby did not have a soul until it laughed, which he said would happen about three months after birth. This allowed space for the Greek custom of exposing a child who was deformed, illegitimate, or otherwise found wanting, so that it died if it was not rescued by the gods or a passer-by. This possibility of rescue cleared the parents of the charge of murder.

    When I saw Abbie Hoffman debate Jerry Rubin, Hoffman claimed his views on abortion were shaped by his religion:

    "The Jewish mother does not consider the fetus a person until it finishes graduate school," he joked.

    But he did have a sort of point. We may consider a newborn a person, but we don't allow it to sign a contract until it reaches its majority at 18 years of age. And yet, we allow newborn corporations to sign contracts and dodge taxes with the best of their human competitors.

    This is because the corporation is not a human person, it is a gestalt being made up of human persons who are of age to sign contracts. We think it is owned by shareholders, but as a person, it cannot be owned. Shareholders buy a right to some of the corporation's future earnings, just as gangsters used to buy a piece of a fighter hoping to gain part of any purse he won (then made sure of it by paying the other guy to go in the tank).

    If you owned a piece of a fighter, you couldn't say, "I'm a bit peckish, cut off a leg for me and I'll eat it," because you can't own a person the way you can own a chicken. Nor can a shareholder demand the corporation sell off part of itself to buy out said shareholder. The shareholder must find a greater fool to buy the shares.

    But what is a human person? We certainly grant them greater rights for being human, and increase their rights as they become more mature in their judgement. In short, we regard them, as Abbie Hoffman's mother did, as more of a person when they have more age and experience.

    One way to explore when a person begins is to ask, at what point does personhood end? In general, our medical experts agree that human life ends when brain activity ends. Why, then, would we consider a zygote, which has no brain, to be a person?

    While some who oppose abortion have claimed there is brain activity at 40 days, this does not seem to be the case. Certainly anyone with a heartbeat has some brain activity, but they would not be considered alive if they had no higher-level cognitive brain activity. One traditional notion was that the child was alive at its quickening. That would be when the mother first feels it kick, at about 16 or 17 weeks from conception.

    But many things kick and are not human. Brain activity that includes higher-level cognition happens at about 26-27 weeks. But that doesn't mean the baby is ready to sign its first contract. Becoming human involves having a human brain, and while a baby is beginning to develop one at 6 months, it doesn't have one yet. More important, it hasn't yet been programmed.

    The real distinction between human and non-human life is the strange sort of virtual reality of the world of symbolic thought. This is part of the reason we delay responsibilities of citizenship such as being able to sign a contract or vote -- it takes a while to gain wisdom. Another reason is simple biology. Our brains mature and with changes in our brains, our judgement matures.

    All of this biology is lost in discussions of what sort of person a corporation is. When does brain activity begin in the corporation? Never. Servants of the corporation do the thinking. When does the life of the corporation end?

    The corporation cannot be killed by driving a wooden stake through its heart, like a vampire, or with a silver bullet. It can theoretically go on forever, never living, but undead, a creature born at the crossroads of law and money, able to corrupt its servants with rewards and punishments and make them do things they would never do as individuals. The corporation is never ensouled.

    A corporation can only die if certain words are inscribed on certain papers and placed in the hands of properly sanctified public servants, perhaps with a sacrifice of money.

    They are loci of power with their own logic, but no soul or conscience, nor in any way a mind of their own. Sometimes their servants manage to gain control of them and use them to increase their own power and wealth while sucking strength out of the corporation, like a demon chained to serve a mage, who is in turn warped by the pull of the soulless thing they have exploited.

    Is it any wonder that corporations, these strange and powerful persons, continue to expand their reach and their power, even in the halls of law? They are like an alien hand in the market, a part of the body politic that can act in ways we don't associate with ourselves.

    And yet, our Supreme Court has ruled that these undead things are persons who act as citizens, with the same rights of free speech as someone with a mind, and the same rights of religious conscience as someone with a conscience. The alien hand has extended its reach, and gripped our most precious institutions.

    Can we find the words to limit their reach, or make the sacred documents that can confine them? Or can we find a way to ensoul them, so that they will be worthy of the responsibilities the court has thrust upon them?




  • Don't let your babies grow up to be booksellers

    Mamas, don't let your babies grow up to be booksellers
    (to the tune of Mamas, don't let your babies grow up to be cowboys, with apologies to the late Waylon Jennings.)



    by John MacBeath Watkins

    Booksellers ain't easy to love and they're harder to hold.
    They'd rather give you a book than diamonds or gold.
    Thick glasses and old faded Levis,
    And each book begins a new day.
    If you don't understand him, an' he don't die young,
    He'll prob'ly just get fat and turn gray.

    Mamas, don't let your babies grow up to be booksellers.
    Don't let 'em quote Dickens or drive them old trucks.
    Let 'em be doctors and lawyers and such.
    'Cos they'll never leave home and they'll recite obscure poems.
    Even to someone they love.

    Booksellers like reference rooms and gray rainy mornings,
    Not little puppies and children and girls on the stairs.
    Them that don't know him won't like him and them that do,
    Sometimes won't know how to take him.
    He ain't wrong, he's just different but his obliviousness won't let him,
    Do things to make you think that he cares.

    Mamas, don't let your babies grow up to be booksellers.
    Don't let 'em quote Dickens or drive them old trucks.
    Let 'em be doctors and lawyers and such.
    Mamas don't let your babies grow up to be booksellers.
    'Cos they'll never leave home and they'll recite obscure poems.
    Even to someone they love.

  • A friend to entropy and an anarchist at heart

    by John MacBeath Watkins

    S. was a tall woman, in her private life a sort of den mother for anarchists with whom she shared a house. Some time after she started working for me, she began dating a cousin of mine who I'd never previously met, and eventually she married him.

    So, I suppose whatever forces shape our fate must have Intended that she be part of my cohort. I thought of her recently, when I asked my business partner where something was.

    "Why do men always ask women where things are?" she replied.

    That was an easy one.

    "Because you move them."

    She had, in fact, tidied away the object in question, and knew exactly where it was in precisely the way I did not. And that is one of the many great things about Jamie. She generally knows where she puts things.

    Not so with S. And this was a problem, because of the way I tend to organize things.

    If I want to be able to find something, I do the obvious thing: I leave it out in plain sight. This tends to lead to a bit of clutter, with the most often-used items on top.

    S. wanted a neat work environment. To her, this meant less clutter. The way she achieved less clutter was in the obvious way: She put things out of view. Unfortunately, once things were out of view, she seemed to think the problem was solved, and actually finding the object next time it was needed was not a high priority for her unless it was something she used.

    I came to view this in terms of entropy. Entropy isn't just a good idea, it's the law, and it clearly states that the universe is going from a higher state of organization to a lower state of organization.

    My system of organization acknowledges this. My environment is in a state of apparently increasing disorder, and yet, for the most part, I can find things. The system S. used involved the expenditure of energy, which generates entropy, to bring the environment to a state of greater disorder, in which information about where things were was destroyed, which is entropy again.

    Now, it is possible for a system of putting things out of sight to preserve this information, even for it to preserve information better than my somewhat sedimentary system of piles. You would, for example, put stuff under "S" for "stuff," and other stuff under "O" for "other stuff."

    This was not the method S. employed. Her method was to expend energy to destroy information, and I cannot help but think that on some level, she did so as a friend to entropy, an anarchist at heart.



  • The Self-conscious mythology of literature (The Strangeness of being human, cont'd)

    by John MacBeath Watkins

    There was an age of myth, when we explained the world to each other by telling stories about the gods. There was an age of fable, when we explained morality to each other by telling folk stories that belonged to the culture.

    And there is the age of literature, when we know who wrote the story, and make it their property.

    In the age of myth, we told each other stories that were supposed to be true, and didn't know where they came from. During the age of fable we understood them as parables. In our age of literature, we understand them as personal insight.

    We regard all as contributing to our understanding of the nature of human nature, but by stages, they have become more tenuously connected with socially constructed truth, and more subject to our self-conscious understanding. We ask ourselves, is this a story we can accept as telling a truth about humanity, or do we reject it? Rejecting the myths was not optional during the time those religions were active. People lived in societies where the truth of the history of the gods was too socially accepted to question.

    To reject the story of a fable, we would have to say that we disagree with the culture, not with the gods. To disagree with an author, we have only to disagree with one individual. The judgments of the author and the reader are those of individuals, with the social acceptance mediated by markets -- which books people talk about, and buy, or feel left out because they haven't read.

    We have other ways of understanding human nature, such as the more rigorous storytelling of science, the unreliable narrators of our families and friends explaining themselves as best they understand themselves, or the frantic efforts of our news sources trying to attract our attention to fragments or figments of information or gossip they think we might like to know.

    But it is literature which works the most like mythology, transporting us into stories and allowing us to experience things that have not happened in our own lives. It instructs us or subverts us in ways mere facts do not, influencing the emotional armature on which we hang our facts and shape them into our beliefs.

    As our culture has changed, we've become more self-conscious of the process. We may choose to judge a book by its author. We might decide that if Ayn Rand could live off Social Security in her old age, perhaps the philosophy she pushed, which would claim only the morally inferior "takers" would need a safety net, was not even something she could live by.

    Or we may say to ourselves, "J.D. Salinger seemed so deep when I was so shallow, such a sallow youth, but now that I'm in the working world I have put aside that juvenile cynicism and taken up the more useful and manipulative cynicism of Dale Carnegie."

    The ability to do this makes our emotional structure more malleable than we would be if the stories we based our lives on were eternal verities handed to us by the gods, as if the clay of our feet never hardens. This gives us an adaptability our ancestors never knew or needed, but what is the cost? Do we become chameleons, taking on the coloration of our social surroundings to better camouflage our true selves, or do we change our true selves at a pace never before seen in human history?

    I suspect the latter. We are bombarded with stories, on television, in games, in books, even, for the dwindling few, in magazines. We grow by accepting them into ourselves, or set boundaries by rejecting them, and we are constantly reshaped, little by little, meme by meme.

  • On the persistence of print and absorbing information (publishing in the twilight of the printed word and the strangeness of being human)

    by John MacBeath Watkins

    I've been reading The Shallows, a 2010 book by Nicholas Carr about how the internet is rewiring our brains, and in the midst of this alarmist text on how much shallower we shall become because of the internet, I've found a cause for hope.

    You see, the Pew Research Internet Project has found that younger people are more keenly aware of the limitations of the internet than their elders.

    I am not a digital native. You might call me an internet immigrant, or even a digital alien. I've come to use the internet quite a lot, but I'm keenly aware that much of what we know isn't there. It's in books, or in people's heads.

    But on this issue, as on so many others, I find that young folks today are in better agreement with me than my own generational cohort. From the report:

    Despite their embrace of technology, 62% of Americans under age 30 agree there is "a lot of useful, important information that is not available on the internet," compared with 53% of older Americans who believe that. At the same time, 79% of Millennials believe that people without internet access are at a real disadvantage.

    I think that's a very realistic assessment. The internet makes it easy to find the information on it, but there's a lot that just isn't there.

    And there is also the issue of what you want to read on the screen. In At Random, Bennett Cerf's memoir of his life in the publishing business, he noted that prior to the introduction of television, fiction outsold non-fiction about three to one. After its introduction and subsequent ubiquity, that reversed.

    But when I looked at Amazon's list of top-selling e-books recently, there wasn't a non-fiction book in the top 40. The Barnes & Noble list of the top-selling hardcover and paperback books shows five of the top 10 being non-fiction.

    It would appear that people's reading habits are adapting to the reality of reading things on a screen which can also be used to go on the internet and buy things or roam around the infosphere. It is Carr's contention that silent reading, which invites us into the private contemplation of the information and thinking of the author better than the public performance of reading aloud, has been with us for about a thousand years. Printed books invited us into this quiet, private world, while reading on a device connected to the internet invites constant interruption. A text littered with links, GIFs, and videos invites cursory and distracted reading.

    But stories were performed by a storyteller or a cast of actors long before silent reading came about. We can immerse ourselves in stories without thinking deeply, let them wash over us and sweep us away without trying to interpret or challenge their thinking. That seems to be the sort of thing we are willing to read on the screen, partly because the experience of being transported into the story makes lower demands on our intellect.

    It seems odd that the newest technology is best for the sort of mythopoetic storytelling where we don't consciously absorb information, while books that demand our use of instrumental logic are best read on paper. Mr. Carr has himself noted that while e-books are now about a third of the market for new books, they are only about 12 percent of the sales of his own, somewhat intellectually demanding books.

    Perhaps this is only a pause in the twilight of the printed word, until e-publishers work out the interface a little better. But I found when I was reading A Course in General Linguistics on line, I wasn't getting as much out of it as I did when I got a paper copy. The text was not interspersed with links, and the copy I got in book form had the distraction of marginalia, but I found it easier to immerse myself in a text I needed to read critically and contemplatively when it lay before me on paper.

    Now, you might think that the young, more adapted to reading on the screen, would simply read more of the sort of short, punchy stories about who was showing side boob at the Oscars and watching cat videos and porn, but according to the Pew study, they are more likely to have read a book in the past year.

    Some 43% (of millennials) report reading a book--in any format--on a daily basis, a rate similar to older adults. Overall, 88% of Americans under 30 read a book in the past year, compared with 79% of those age 30 and older. Young adults have caught up to those in their thirties and forties in e-reading, with 37% of adults ages 18-29 reporting that they have read an e-book in the past year.

    Interestingly, e-books seem to have caught on with older adults first, perhaps because you can adjust the type size in an e-book.

    But for now, e-books seem to have unexpectedly plateaued, and printed books persist.

  • Religion as an interface: The Strangeness of being human cont'd.

    by John MacBeath Watkins

    One of the most popular posts on this blog explores the roots of religion, and the need we have for a mythopoetic understanding of the world. Scott Adams, blogger and cartoonist of the Dilbert strip, says that religion is not a bad interface with reality.

    And it strikes me that as we've made our machines more compatible with us, we've made them more artistic and poetic. I do not speak machine language, but I am able to communicate with my computer through my simple faith that when I reverently click an icon, the file will open.

    On rare occasions, I have to use the command line to communicate in a more concrete way with my computer, and sometimes I even have to open the back and stick in more memory. But I don't really understand the machine in the way my nephew Atom Ray Powers, a network administrator, does, nor do I understand the software the way his brother, Jeremy, a programmer does. And neither has studied assembler code, which my uncle Paul learned after he was injured out of the woods as a logger.

    It's as if we are replicating the way people perceive the world. The graphical user interface gives us a visual, metaphorical understanding of how to face the reality of the computer, just as religion gave us a metaphorical, poetic, and often visual way of interacting with the reality of the world. The command line gives us greater control of the computer, just as technology gives us control of nature. Science attempts to learn how the world really works, at deeper and deeper levels, similar to knowing how the transistors work and how to read machine language.

    The fact that computer scientists, who started at the scientific end of things, felt a need to make the interface more metaphorical and even artistic tells us something about how humanity interacts with the world. The intuitive approximation is vital if we are not to be overwhelmed with detail. It is sometimes said that ontogeny recapitulates phylogeny, because every fetus goes through phases of looking like a primitive fish, then a salamander, and eventually takes on human form. It would appear that the same thing happens cognitively.

    Those of us, like myself, who follow the methods of the metaphorical interface in our daily lives often seek guidance from computer gurus. And those gurus, when they are not repairing malfunctioning machines or recalcitrant code, operate their computers in the symbolic realm made possible by the GUI.

    We seem to have some difficulty doing this in our world of faith and science. This is usually because each side insists that its way of understanding the world is truth, and therefore the other cannot be. But a model of an atom isn't what an atom really looks like, because an atom is smaller than a wavelength of visible light. All of our understanding is metaphor and artistic license at some level. In my view, we have understandings at different levels.

    Now, perhaps I've offended some religious people by saying religion is metaphor. But all sacred texts were written to be understood by people, not by gods. All of our understanding is metaphor. "For now we see through a glass, darkly" a biblical passage says. We understand the world by telling stories about it, and deciding which best describe it. Sometimes, as with math, the stories can be very precise, and the grammar quite rigorous, but they are stories none the less.


