• 0 Posts
  • 49 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • In the very first real programmer job I had, back in 1986, the IT department estimated that they had a 51 man-year backlog of development work. That would have translated to two or three calendar years of work. Probably more, considering how crappy estimates always are, and how they always under-estimate.

    It turns out that this is pretty much the industry standard. Virtually every place I worked for the next 35 years had a similar size of backlog. And that backlog isn’t standing still, either. All you can hope is that 3 more years’ worth of new requests don’t come in during the two years it takes to complete what you already have.

    Some of those new things are going to have a higher priority than stuff that’s already in the backlog. The reality is that some item that’s down at the low end of the list is going to get bumped down, again and again, and never get done. Or it’s going to someday become an urgent priority that can’t wait any more.

    So the pressure is always intense for the developers to go faster, faster, faster. And the business doesn’t understand or care about good engineering practices, even though the shit hits the fan when a critical bug gets released to production. And God help you if that backlog of 51 man-years has grown to 70 after a year because of the technical debt you introduced trying to go faster.

    The fight to rein in scope is constant. At that first job, the head of the department told us to “build Volkswagens, not Cadillacs”. It was laughable, because they were struggling to keep up while building Skodas.

    You can’t just add more programmers, because the productive backbone of the development team is a group of programmers who have all been there for at least 5 years and are domain experts. It’s going to take at least 5 years to bring new hires up to that level of knowledge.

    And that’s all three sides of the project triangle: scope/quality, resources and time. You can’t meaningfully add resources, scope’s already stripped down to bare bones and the time is too long.

    And the truth is that every one of those projects in that 51 man-years backlog is important, even critical, to some aspect of the business. But the development process is unfathomable to muggles, so can’t you just go faster? Can’t you wring a bit more productivity out of those domain experts?

  • Maybe…but two things:

    If the number of obese people is lower, then what are the people who aren’t mildly overweight? Mostly a healthy weight. So even if the percentage of mildly overweight people stays the same, the day-to-day comparison is with a bigger group of healthy-weight people, so the mildly overweight probably stood out as more recognizably overweight.

    Secondly, with fewer really obese people you wouldn’t get desensitized to seeing fat all the time, which is what makes mildly overweight people seem more normal. Somebody with a BMI of 26, about 15 lbs overweight, would have been more likely to be described as “plump” or “husky” back then. But when crowds are full of people who are 50+ lbs overweight, that 26 BMI seems downright healthy.

    This is all speculation. I can’t remember how I perceived overweight vs obese people back in the 80’s.

  • This is true, but…

    Moore’s Law can be thought of as an observation about the exponential growth of technology power per $ over time. So yeah, not Moore’s Law, but something like it that ordinary people can see evolving right in front of their eyes.

    So a $40 Raspberry Pi today runs benchmarks 4.76 times faster than a multimillion-dollar Cray supercomputer from 1978. Is that Moore’s Law? No, but the bang/$ curve probably looks similar to it over those four-plus decades.
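    To put a rough number on that: if you treat bang/$ as benchmark speed divided by price, the implied doubling time comes out to a couple of years, which is Moore’s-Law-ish. Here’s a back-of-the-envelope sketch in Python; the ~$8 million Cray price and the 45-year span are my assumptions for illustration, not figures anyone has to stand behind:

    ```python
    import math

    # Assumed figures, for illustration only
    cray_price = 8_000_000   # rough Cray-1 price in dollars (assumption)
    pi_price = 40            # Raspberry Pi price
    speed_ratio = 4.76       # Pi benchmark speed relative to the Cray
    years = 45               # roughly 1978 to today (assumption)

    # How much more benchmark performance per dollar the Pi delivers
    gain = speed_ratio * (cray_price / pi_price)      # ~950,000x

    # If that gain were a steady exponential, how often did bang/$ double?
    doubling_time = years / math.log2(gain)           # ~2.3 years

    print(f"bang/$ improvement: {gain:,.0f}x")
    print(f"implied doubling time: {doubling_time:.1f} years")
    ```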

    You can see a similar curve when you look at data transmission speed and volume per $ over the same time span.

    And then there’s storage: going from 5 1/4" floppy disks, or effing cassette drives, back on the earliest home computers; or the round tapes we used to cart around when I started working in the ’80s, which held around 64KB; to micro SD cards with multi-terabyte capacity today.

    Same curve.

    Does anybody care whether the storage is a tape, or a platter, or 8 platters, or circuitry? Not for this purpose.

    The implication of “That’s not Moore’s Law” is that the observation isn’t valid. Which is BS. Everyone understands that the true wonderment is how your bang/$ goes up exponentially over time.

    Even if you’re technical, you have to understand that this factor is what drives the applications.

    Why aren’t we all still walking around with Sony Walkmans? Because small, cheap hard drives enabled the iPod. Why aren’t we all still walking around with iPods? Because cheap data volume and speed enabled streaming services.

    While none of this involves counting transistors per inch on a chip, it’s actually more important/interesting than Moore’s Law. Because it speaks to how the power of the technology available for everyday uses is exploding over time.

  • Back in the 70’s and 80’s there were “Travesty Generators”. You pushed some text into them and they developed linguistic rules based on probabilities determined by the text. Then you could have them generate brand-new text by randomly applying those rules.

    Surprisingly, they would generate “brand new” words that weren’t in the original text, but were real words. And the output was a stylistic match for the input text. So you put in Shakespeare and you got out something that sounded like Shakespeare. You get the idea.
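    The core of one of those generators fits in a few lines. Here’s a minimal sketch of the idea in Python, assuming a character-level model of fixed order (the order, the sample length and the file name are placeholders of mine, not anything from the original programs):

    ```python
    import random
    from collections import defaultdict

    def build_model(text, order=3):
        """For every `order`-character window in the source, record the
        characters that follow it (repeats preserve the frequencies)."""
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def travesty(text, order=3, length=500):
        """Generate new text by repeatedly sampling the next character
        from whatever followed the current window in the source."""
        model = build_model(text, order)
        window = text[:order]            # seed with the opening of the source
        out = [window]
        for _ in range(length):
            followers = model.get(window)
            if not followers:            # dead end: re-seed and carry on
                window = text[:order]
                continue
            ch = random.choice(followers)
            out.append(ch)
            window = window[1:] + ch
        return "".join(out)

    # Hypothetical usage: feed it some source text, get a "travesty" back
    source = open("sonnets.txt", encoding="utf-8").read()
    print(travesty(source, order=4, length=400))
    ```

    Crank the order up and the output hews closer to real words and phrases from the source; drop it down and you get purer gibberish.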

    I built one and tried running some TS Eliot through it, because his stuff is, IMHO, close to gibberish to begin with. The results were disappointing, basically because it couldn’t get any more gibberishy than the source.

    I strongly suspect that the same would happen with Trump’s gibberish. There used to be a bunch of Travesty Generators online, and you could probably try one out to see.