• You can hop over to my Substack to see my thoughts on this one!

  • In short, current-gen digital assistants have a social IQ of 0. They’re increasingly aware of every detail of our digital and physical lives, able to understand what we’re doing and provide contextual help and relevant information without prompting — but they’re completely blind to the social context in which our activities occur. They’ll eagerly offer up the most private details of your life to anyone who manages to access your device(s), legitimately or otherwise.

    Sometimes the result is funny-tragic, like Windows 10 helpfully turning someone’s porn stash into a screensaver, but it’s not difficult to imagine scenarios where it could be dangerous, like turning scans of passports or personal documents into a screensaver, or exposing your contact list to a competitor or stalker. In general, it’s not safe at the moment to hand access to your computer or phone to anyone you don’t completely trust, because your device’s concept of “you” doesn’t really exist. There’s just “what’s typically done on this device” and current-gen digital assistants will helpfully remind and search and suggest based on what’s typically done, even if the results could be embarrassing, awkward, or dangerous.

    Since we’re getting to the point where digital assistants are viable thanks to the machine-learning revolution, we seriously need to start thinking about social context. Our devices are increasingly full of sensors that can detect bits and pieces of the real world, but there’s little effort right now to build software that incorporates that information into a social context — to build software that can:

    • understand the difference between an audience (physical or virtual) of “me,” “me and my partner,” “me and my friends,” “me and my colleagues,” “someone who isn’t me,” etc., and
    • determine whether a piece of information (documents, applications, history, etc.) is appropriate for that audience (a rough sketch of the idea follows this list).
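
    To make that concrete, here’s a deliberately naive sketch of what such a gate might look like. To be clear, none of this exists today: detectAudience() and the photos collection are invented for illustration, and treating audiences as a single privacy ladder is itself a big simplification.

    // Hypothetical audiences, ordered from most private to most public.
    var AUDIENCES = ['me', 'me+partner', 'me+friends', 'me+colleagues', 'stranger'];

    // An item tagged with the widest audience it's appropriate for may be
    // shown to that audience or to any more-private one (lower index).
    function isAppropriateFor(item, currentAudience) {
        return AUDIENCES.indexOf(currentAudience) <= AUDIENCES.indexOf(item.maxAudience);
    }

    // e.g. a screensaver picker that only draws from photos cleared for
    // whoever is (or might be) in the room right now:
    var pool = photos.filter(function (p) {
        return isAppropriateFor(p, detectAudience());
    });

    The hard part, of course, isn’t the gate; it’s detectAudience(), which is exactly the social intelligence current assistants lack.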

    Until operating systems, applications, and assistants are smart enough to understand that they shouldn’t display personal contact lists when screensharing on Skype, or that they shouldn’t passively include stashed porn in screensavers, or that they should hide banking information or travel schedules if a stranger enters the room while it’s visible on the screen, or a thousand other scenarios that require some sense of social context — they’re potentially dangerous.

    Right now, these primitive AIs all operate as if every device is your private, personal device which is never accessible or visible to anyone else, and until they gain some degree of social intelligence, I won’t be using them. I’m cautiously optimistic that we’ll be able to resolve these issues in the next decade or two and I’ll happily jump on board, but it will require paying attention to the fact that it’s an issue to begin with — one that, to date, has had very little high-profile discussion and needs much more.

  • Back in 2011, I wrote this post on the state of the global economy. Five years have passed, so a quick update on that for 2016:

    Global population 2016 (just Google it!): 7.4 billion

    Global Employment (And Sector Breakdown)
    Workforce: 3,273M people (3.3 billion)
    916M / 28.0% in agriculture
    718M / 21.9% in industry
    1639M / 50.1% in services

    Unemployment and Job Security
    ~200M unemployed
    1.5 billion / 46% of the global workforce are considered “vulnerable” — non-salary self-employed (small businesses, contract workers, migrant farm workers, gig economy workers, etc.) or working for little / no pay for other family members (farms, businesses, etc.)

    Manual Labor Increasingly Devalued
    1 farmer with modern technology in the US can feed ~155 people. If we enabled farmers globally to operate at the same levels of efficiency, we’d need roughly 48M farmers (5.25% of the current number) to feed the world — and this is without major inroads from emerging robotic / AI technologies into agriculture. (Current-gen precision agriculture isn’t the same as autonomous agriculture, which is where future gains are to be made.) In other words, about 95% of the people employed in this sector are ultimately replaceable with current technology, and that number is likely to increase as automation improves.
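
    The arithmetic behind those figures, spelled out (this just reproduces the numbers above, nothing new):

    // Back-of-the-envelope check on the farming numbers:
    var population = 7.4e9;       // 2016 global population
    var fedPerFarmer = 155;       // people fed per modern US farmer
    var currentFarmers = 916e6;   // global agricultural workforce (see above)

    var farmersNeeded = population / fedPerFarmer;   // ~47.7M
    console.log(farmersNeeded / currentFarmers);     // ~0.052, i.e. ~5% of today's farmers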

    Manufacturing employment has been decreasing by ~0.4% / year, down to 11.5% in 2014 (~360M). Full automation with AI is unlikely to make human involvement more necessary.

    New Jobs in New Industries Can’t Keep Up
    Less than 0.5% of jobs in the US in 2016 are in technology-enabled “new industries” (ones that didn’t exist in 2000, including renewable energy and biotech).

    Based on the above 0.5% figure, the US is shifting about 0.03% of workers a year into new industries, while as many as 47% of US jobs could be lost to automation by 2035 (see this report). This is not remotely fast enough: losing 47% of jobs and replacing 0.03% / year, it would take roughly 1,566 years simply to catch up with the loss. It also doesn’t account for the need to add another 20-30M jobs overall to the US economy to compensate for labor force growth due to population growth between now and 2035 (source). And it calls into question the “people just need to retrain” response put forward even by some experts: retraining is useless if there are no jobs to be had for retrained workers, presuming they can retrain quickly enough to keep up with changing technology in the first place (unlikely).
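
    Again, the arithmetic, so the 1,566-year figure isn’t taken on faith:

    // How long to replace automated-away jobs at the current shift rate?
    var jobsAtRisk = 0.47;      // share of US jobs automatable by 2035 (report above)
    var shiftRate = 0.0003;     // share of workers shifting to new industries per year
    console.log(jobsAtRisk / shiftRate);   // ~1,566.7 years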

    A Note About That “Gig Economy” Work
    The so-called “gig economy” is not a solution, and might actually be dangerous. People working contract gigs (rather than being salaried with benefits) are pretty much the very definition of vulnerable workers per the ILO. These jobs typically have no long-term reliability and no benefits, particularly health care or pension plans. It’s not surprising that the countries with the highest ratios of self-employed to full-time workers include ones known for corruption and instability, such as Mali, Niger, Liberia, South Sudan, and Sierra Leone – full discussion here.

    Surprising no one, I was unable to find decent statistics about income distribution on common sites like Upwork or Fiverr, but it’s likely to resemble the app store profit curve, where almost nobody outside the top few percent of earners is making enough to survive in a developed economy.

    Conclusion
    Still no indication that there are ever going to be enough jobs again. Neither major candidate for US President in 2016 is acknowledging the degree to which this is a problem; Trump will blame the unemployed for their own predicament, and Hillary is too out-of-touch to understand how difficult things are becoming for the “average” American. A Hillary victory seems likely to make the 2020 elections even friendlier to a populist demagogue, due to a build-up of frustration and desperation. Trump will embolden people to take their frustrations out on each other, particularly if the “other” isn’t a flag-waving white Christian.

    Needless to say, we need to treat this more seriously than it’s currently being treated. Per my last post on this topic:

    “Liberals” think that “conservatives” are keeping them poor. “Conservatives” believe liberals are lazy hippies who just need to get a job. Communists say that communism will save the world, and the Ayn Rand fans wave the banner of individualism and capitalism as some sort of cure-all. All of them, I think, are wrong.

    The debate must be reframed in terms of a world where we no longer need everyone to work and there will never again be enough jobs.^ What sort of world do we want that to be? While I believe that not everyone needs to work, I believe most people are happier if they are being productive members of society in some way, so we need to refashion our economic realities to provide them an opportunity to do so and to have a basic standard of living. I don’t know what this world will look like. I don’t know if we will ever get there. I don’t believe capitalism is the answer, nor do I believe that communism is. Humans have a competitive instinct that must have an outlet, but must also be channeled for the greater good.

    Out of time, but I’ll follow this up at a later date with thoughts on possible approaches to change. In the meantime, the links above should provide food for thought.

  • So for the past few weeks, I’ve been working on a little project that is basically Imgur for picture-based polls. Head over to picpix.co and click the “make a poll” button to make your own, or browse public polls for fun.

    [Image: sample picpix poll]

  • I have been following Google Glass for a while, and am pretty excited about the technology and the possibility of someday getting my hands on one, both as a user and as a developer. In the meantime, I’m following the back-and-forth between fans of the tech and people who believe it’s the Apocalypse of Personal Privacy, and some of the reactions are really starting to irk me (see “Stop The Cyborgs” and “Seattle Bar Bans Google Glass” for examples). I published my feelings on the matter as a late comment on a Slashdot article (“Should We Be Afraid of Google Glass”), but since I keep seeing more of these ignorant, emotional reactions popping up, I’ll repost it here:


    This kneejerk fear that you are “being recorded” in public places is irrational and stupid, and advances in technology you’re probably not aware of will shove the issue in your face within a matter of decades anyway (see Brain Movies for something thought-provoking). We forget or dismiss that we are already recorded, in a manner of speaking, by the human eye and brain whenever anyone else sees us, which is pretty much analogous to cameras and digital memory and is exactly what Glass does. I already refrain from acting in ways I don’t want to be remembered by other people when I’m around people (or think I might be), and in my opinion this is no different. Personally, I hate the idea of stationary hidden surveillance cameras or camera-equipped drones far more than I’m bothered by the notion that someone who looks at me can remember me, tangibly or mentally: in the long run I have no assurance that someone who’s seen me can’t someday have their brain imaged while remembering what they saw, and with hidden cameras or drones I simply have no way of knowing that I’ve been seen in the first place.

    I realize people will argue that memory is more fallible (then again, digital imagery can be manipulated) and currently can’t be shared with other people (see the prior paragraph), and that somehow this is more comforting. But we will end up facing this issue as a species one way or another, and as a result Glass doesn’t bother me in the least. If you don’t want to be recorded, then disguise yourself or stay away from people you don’t completely trust, because laws and feelings ultimately cannot — and never could — prevent people from remembering you or surreptitiously recording your image in the first place.

  • If you’re trying to do some URL rewriting with JBoss’s org.jboss.web.rewrite.RewriteValve and finding that the “-f” option doesn’t work correctly on RewriteCond, it may be because the damned code hasn’t been finished on JBoss’s side, despite what the RewriteValve documentation implies. I dug up the RewriteCond source on GrepCode, and in every version I looked at (2.1.0.GA all the way through the version 3 alphas), the ResourceCondition test simply returns “true” whether or not your file actually exists, instead of running through the TomcatResolver to check.

    Very, very broken. Hopefully this helps someone not waste hours on it like I did — it’s bad enough that it’s next-to-impossible to get useful logging going, but finding out the code was never implemented after all that effort induces hair-pulling. For my personal project, I was trying to serve up a “default” theme for a customizable user interface if a more specific skin hadn’t been specified, but since I can’t test for the existence of a more specific skin within JBoss, I guess I’ll have to install something like Apache that actually works and serve the skins externally. Ah well.
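
    For reference, this is the kind of rule I was trying to write, in the mod_rewrite-style syntax the RewriteValve documentation describes (the paths here are hypothetical stand-ins for my skin setup). Per the above, the -f test always evaluates to true under JBoss, so the negated condition never matches and the fallback never fires:

    # Intended: if the requested skin file doesn't exist on disk,
    # rewrite the request to the default theme instead.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^/skins/(.+)$ /skins/default/$1 [L]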

  • Short post. :) I’m a language geek, and I’ve been re-watching the mytharc episodes of The X-Files. I noticed that near the end of 4×09 (“Terma”), Peskow says something in Russian when he sneaks up behind Scully that the unofficial transcript (there are no subtitles) renders as “sur posidive” and translates as “at last.” I speak enough Russian to know that “sur posidive” isn’t Russian at all, and I heard the first word as все (vse), meaning “all.” So I listened again, looked things up, and consulted a Russian friend. The verdict? He probably says “все позади” (vse pozadi), meaning “it’s all over.” Minor differences, but in the interests of accuracy… :)

  • I spent a great part of the last 24 hours trying to chase down a couple of memory leaks in a JavaScript project I’m working on. After much hair-pulling, a couple of observations (and no more memory leaks) have resulted:

    1. jQuery’s element creation guides leave a lot to be desired. On their official site, you can read the following example under “Creating New Elements”:

    $('<p id="test">My <em>new</em> text</p>').appendTo('body');

    Later, they discuss creating elements as follows:

    $('<input />', {
        type: 'text',
        name: 'test'
    }).appendTo('body');

    I’m only targeting WebKit and Mozilla browsers for this project (I have the luxury of doing so), so I’m not concerned with IE quirks. What does concern me is that creating elements as in the first example causes memory leaks if you do it a few million times (for example, updating some page element to reflect incoming realtime data). If you put a string of complex HTML into $(), jQuery seems to cache the parsed fragment somewhere, and that cache does NOT get erased even if you call .html(''), .empty(), or .remove() on the parent; the element is merely removed from rendering. Elements created the second way are fully released from memory instead of being parked in some #document-fragment or $.fragments cache (this stackoverflow discussion seems to be very similar to the problem I experienced). So even though the second syntax is far clunkier for making complex little HTML updates, it doesn’t leak.
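
    As a sketch of the difference (the element names and functions here are invented for illustration; this is the pattern as I observed it, not official jQuery guidance):

    // Leaked over millions of calls: jQuery appears to cache the parsed
    // HTML fragment, and emptying the parent doesn't release it.
    function updateLeaky(value) {
        $('#status').empty();
        $('<span class="value">' + value + '</span>').appendTo('#status');
    }

    // Didn't leak: create a bare element and set attributes/text via the map.
    function updateClean(value) {
        $('#status').empty();
        $('<span/>', { 'class': 'value', text: value }).appendTo('#status');
    }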

    2. jQuery Sparklines is a nice little jQuery plugin that lets you make sparklines out of some data on your page. Data visualization is fun and everyone likes it, but even after I fixed the leak above, I was still observing memory leaks related to Sparklines. Sparklines is sort of indirectly to blame: it keeps a “pending[]” array that links to your page elements and stores any sparklines that aren’t visible until you call $.sparkline_display_visible() to render everything in the pending[] array. This is fine for static pages, but on dynamic pages it can have the undesirable side effect of stacking up millions of sparklines (in itself a sort of memory leak) by the time someone gets around to clicking the tab — even if those sparklines were attached to elements that have since been removed from the DOM via .remove(), .empty(), or .html(''). The latter cases, of course, are effectively memory leaks, since references are left hanging around in that “pending[]” array. The easy fix is just to never request a sparkline for a hidden element in the first place, but it still feels clunky to me. It would have saved me some time if this implication of Sparklines tracking requests for hidden elements had been explicitly documented.
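
    The workaround looks something like this (a minimal sketch; renderSparkline and its options are my own naming, not part of the plugin):

    // Only request a sparkline when the target element is actually visible,
    // so nothing piles up in Sparklines' pending[] array.
    function renderSparkline($cell, data) {
        if ($cell.is(':visible')) {
            $cell.sparkline(data, { type: 'line' });
        }
        // Hidden cells are simply skipped; the next realtime update will
        // supersede this data anyway (see the use-case below).
    }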

    (My use-case is replacing cells in a table based on real-time updates via WebSockets; some of these updates are used to generate sparklines that go stale on the next update for a given category, so if they haven’t been observed, they should be discarded.)

    Yay for memory leak troubleshooting – pretty much my least favorite part of coding ever. :)

  • [Image: BioWare Mass Effect 3 promo art]

    What fascinates me about the ending of ME3 isn’t so much the fan rage (the ending was a blatant rip-off of The Matrix, Deus Ex, Battlestar Galactica, and god knows what else) as the fact that reviewers are so intent on pretending that the ending was some sort of literary or artistic triumph and that the fans are just ignorant philistines who should go back to school and learn to appreciate “real” art.

    There are two major problems I have with this critical reaction. First, I’ve long been annoyed by a certain streak of “anti-fun” in modern intellectualism as it regards art in general. It’s like an ascetic Protestant streak, some notion that if you’re actually enjoying something it can’t be art, because real art is painful and difficult and anything else is pedestrian junk food for people with low IQs. So a large share of the reviews of ME3 take the approach that because the end of the game involves suffering and sacrifice and inevitability (and really, not much fun at all), it is “higher” art than if it had a heroic, happy ending. “It’s more true to life,” they claim, and so it’s more valuable, more “authentic.” This is the same to me as rejecting anything other than photorealism in painting because it’s more “authentic” (although you might as well just take a photograph and save the effort instead of trying for a trompe-l’oeil effect). Happiness, joy, and beauty are not invalid topics in art. Trying to create a world just so you can destroy everything good in it doesn’t make you “edgy” or give you extra intellectual brownie points — it just makes you a dick. (Sorry, G.R.R. Martin / Game of Thrones fans.)

    The second thing that really bothers me is that most games are not static forms of art, and especially not role-playing games. The draw of an RPG has always been to customize your character and interact with the world and make it your own, then play again with a different character and experience it through another set of eyes. The reviewers who laud the designers’ choices for their supposed artistic integrity and “refusal to bow to the least common denominator” while bashing the fans as “failing to get it” or “trying to steal control from the artists” are entirely missing the point. ME3 isn’t a movie or a book that asks you to sit back and experience a sequence of events laid out in advance by a creator or team, with no input. It promises to let you influence the story, to come out at divergent points with different characters, and to explore possibilities as much as narrative. By presenting essentially a single ending (the three are not sufficiently different to require multiple playthroughs to experience) over which you have no real control, it’s almost as if the medium has been switched underneath you as the player. You’re watching a movie, and then suddenly you’re…reading a book. Or you’re listening to music, and suddenly someone is reading out the notation – “A-flat above middle C, F above middle C, B-flat below middle C.” Or you’re watching a play, and suddenly everyone stops moving and you’re looking at a still-life that never resumes. The point is, any of these switches would be jarring for someone expecting a certain experience.

    I realize that this type of dissonance is a valid subject for art to explore, but I think that it would be absurd to think that it would go over well with mainstream audiences who have purchased the work with specific expectations — ones that were encouraged and advertised by the company selling it. It’s not even that it’s invalid in any “moral” sense for EA/BioWare to do this, just that it’s bad business sense because it’s effectively a type of bait-and-switch where people were purchasing entertainment and got complicated “intellectualism”^ instead. If you burn your customers, they won’t come back.

    This isn’t the first time we’ve seen this, of course. The Matrix trilogy comes to mind, and I think it’s not surprising that the first movie is considered by far the best. Critics will try to say that fans didn’t react so badly to that ending (with the Architect nonsense), but those fans also had less investment. As a movie viewer, you go into a theater knowing full well that you might not get an ending you like. As a gamer playing a genre RPG, you have a certain expectation that at least one ending will probably be something you can get on board with. Mass Effect 3 gave us nothing but an exercise in artistic dissonance that will please people who want to feel like they’ve achieved something noble by suffering through yet another round of artistic self-flagellation. For everyone else, it’s just a let-down.

    ^Edit: I should note that I’m not against making people think. Thinking is good for you. Just don’t expect people to like it when they show up to your techno thriller film and you slap them in the face and tell them to RTFM because they don’t know how to compile a kernel. 😉

  • Over the weekend, I got a request to post an infographic on my blog related to the “ethics of the wealthy,” supposedly on the strength of one of my recent OWS-related posts:

    Hi Nathaniel,

    In searching for blogposts that have used or referenced OccupyWallst.org, I stumbled across your site and wanted to reach out to see if you were interested in sharing a graphic with your readers. It illustrates studies found on how those socially and financially well-off behave unethically compared to the lower ladder.

    Would love to connect, if you’re interested. Thank you!


    Tony Shin
    @ohtinytony

    I replied that I’d be interested in looking at the graphic, and a day or so later got a response. The graphic bothered me, because it contained hardcoded references back to “accountingdegreeonlineDOTnet” (butchered because I don’t want to give them a link accidentally). The site is featureless, with no useful information about the people behind it, and its WHOIS information is firewalled behind a privacy shield, so I emailed “Tony” back requesting more information about the site. I got no response from “Tony,” but the next day a second email address on my site got the same original request from “Tony,” with the same wording.

    I put “Tony” in quotes because I did a quick bit of googling tonight and found posts by a guy named Mark Turner on the Mystery of the Infographics. It seems he got the same sort of spam I did, including some from “Tony,” though his offers were related to PIPA/SOPA and TSA topics. Both of us checked out “Tony’s” Twitter, which doesn’t link back to these sites he’s promoting, so something is certainly fishy, even before you consider that he spammed multiple accounts on my site with the same infographic request.

    I’m not entirely sure what the purpose of this is – SEO spam, or what. That’s the best guess I have, but whatever it is, it’s annoying. I won’t be posting that particular infographic, but if you’re interested in the ethics of the wealthy, this recent Berkeley study will be of interest to you. There’s some irony here as well, because whoever’s behind this seems, at best guess, to be wealth-seeking and behaving pretty unethically. :P Figures.