July 27 2016

bbd
16:05
Then again, I've never heard of Polarkreis 18 %-)

July 21 2016

bbd
14:22
I miss Moebius :'(

July 16 2016

bbd
14:10
Well, with a dynamically linked app I can trivially ask what it is linked against (ldd will tell me). A static binary is opaque. And "test for the behavior" is not always sanely possible, especially with security bugs. Just replacing the shared library (and rebuilding where necessary, which is trivial to find out) is much safer.

Replacing a library with an incompatible one can be handled by the package manager: until the packages that depend on the old version have been updated, you keep both versions of the library installed (in Gentoo, this is done via emerge @preserved-rebuild).

Making every application hermetic doesn't work: things tend to talk to each other, sometimes in very subtle ways, like dlopen() (which is like linking, but can happen at any time during a program's run; media plugins and the like are one example). A sketch of what that looks like is below.
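
To make the dlopen() case concrete, here is a minimal C sketch; the plugin name "libplugin.so" and the entry point "plugin_init" are made up for the example, but media players load their codec plugins in essentially this way:

    /* Runtime linking: load a shared object long after program start. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical plugin name; unlike an ordinary linked library,
         * this dependency is invisible to static inspection of the app. */
        void *handle = dlopen("libplugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Resolve a symbol by name, as the dynamic linker would. */
        void (*plugin_init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
        if (plugin_init)
            plugin_init();

        dlclose(handle);
        return 0;
    }

(Build with cc plugin.c -ldl. The point: this dependency only exists at run time, so you can't find it just by looking at the app's headers.)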

This hermetic approach can be done for very tightly controlled setups, like @schlingel mentioned. But I don't think it will ever work sanely for a desktop computer/workstation.

July 13 2016

bbd
07:31
I think the idea of "apps+libs will update faster" is overly optimistic. Bundling does not lead to patches being contributed upstream, but rather to the bundled library drifting away from lib-upstream.

Also, having multiple versions of a library installed is very much possible if the library maintainer puts in a little effort (and even if they don't, some distros have shown that parallel installation of different versions is quite possible; see the layout sketched below). Slow migration from libfoo-1.2.3 to libfoo-1.2.4 is already done today.
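
The standard mechanism for this is the soname: the filename encodes the binary-compatible version, so incompatible versions can live side by side. An illustrative layout (libfoo and the version numbers are invented for the example):

    /usr/lib/libfoo.so.1.2.3                  real old library
    /usr/lib/libfoo.so.1 -> libfoo.so.1.2.3   soname link used by old apps
    /usr/lib/libfoo.so.2.0.0                  real new, incompatible library
    /usr/lib/libfoo.so.2 -> libfoo.so.2.0.0   soname link used by new apps

A compatible update (1.2.3 to 1.2.4) just replaces the file behind libfoo.so.1 and every app picks it up on its next start; an incompatible one gets a new soname and coexists with the old.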

And developers can't always depend on the latest library: APIs change; what the app developers considered a feature, lib-upstream considers a bug and removes. Or the new version breaks a use case lib-upstream doesn't care about.

The problem with as-many-versions-as-you-want: who does the work and the testing? How does a user decide which package is right for them? When things break, tracking down issues carries a much larger cognitive load, since it's never quite clear which packages are affected (users tend to under-report pertinent details).

"Just use the latest" doesn't work due to the problems I outlined above. And patching it will lead to vulnerabilities nobody is aware of, it will lead to incompatible drift of patchsets and quite a bit of extra work for everyone downstream. A fine example of the mess this can become is MPlayer. but there are many more. One problem is: where do you draw the line between bundled and non-bundled libraries? The libs have dependencies themselves, and with patched and bundled libs that drift, you'll soon find that more and more libraries have to be bundled, until you're basically shipping everything but glibc.
bbd
07:22
For the pinning of overly-specific versions: in my experience, app maintainers will tend to require you to use libfoo-1.2.3.4 even when any libfoo-1.2.*.* would work (see the sketch below). Thus, if you have three apps of that sort, deduplication tends not to work, dragging you back to the era of static linking without any real necessity.
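
In hypothetical package-metadata terms (the syntax and all version numbers are invented for illustration):

    app-a  depends: libfoo = 1.2.3.4    over-pinned: exactly one build accepted
    app-b  depends: libfoo = 1.2.3.7
    app-c  depends: libfoo = 1.2.5.1
    result: three copies of libfoo to install, track and patch

    app-a  depends: libfoo >= 1.2, < 1.3    what the app actually needs
    result: one copy serves all three apps

The second form is what sonames already express at the binary level; the over-pinned form throws that deduplication away.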

While this can already be a problem with today's setup, the distro maintainers sitting between the end user and upstream will have none of that and do a certain amount of QA (there was a recent discussion about the value of having distro maintainers; I can dig up a link if you're interested).

As for package features: we're already at the point where binary distributions have to make a choice about which features to enable in a package. For example, Debian has exim-light, exim and exim-heavy, with the corresponding feature sets and dependencies. My prediction is that if upstream does the packaging (and you seem to imply that, or at least a vastly diminished integration role for the distro maintainers), there will be only one exim package, the equivalent of exim-heavy, with all of its dependencies always enabled.

(Note: I am using exim as an example package, but you could also use something like Apache or PHP; moreover, I do not want to imply ignorance or some such on their end. Every time I have had to work with those upstreams, they have been very reasonable and nice to work with.)

July 12 2016

bbd
15:41
Problems:

- Apps will pin themselves to specific library versions, whether that's necessary or not. Thus the main advantage of shared libraries (updating LibFoo once) is gone, making security a nightmare.

- Because of the same over-specification, you'll have sixteen different versions of LibFoo on your system.

- Things become entirely non-optional. A vim package without X11 support? Sorry, we don't provide that. At best you get a small subset of the combinatorial explosion that a feature set like Apache's implies (cf. exim-light, exim-heavy).

- To avoid these shenanigans, app maintainers will do even more bundling of heavily patched versions of LibFoo, making it nigh impossible to know just how vulnerable your system is.

No, thanks.

July 07 2016

bbd
19:57
Courage:

Browsing http://www.soup.io/fof in public.

June 14 2016

bbd
15:23
The formatting of this slide annoys me more than it should.

April 09 2016

bbd
14:16
One more reason not to eat oysters ;)

March 27 2016

bbd
08:45
No, not in the moral sense at all.

It is wrong because it leads to a very flawed mental model of what is going on. There are much better ways to think about AI, even if we don't know what it is made of, whether it has consciousness or not and so on.

Don't get me wrong, I am not saying that empathy or sympathy for anything (including dead things, for example because they represent a cherished memory) is wrong.

Problems arise when you ascribe human-like emotions or mind states, human motivations or outlooks, to the thing. If you do that, you are not only fooling yourself, you are also not doing right by the AI.

Consider this: a dog is a lot closer in its functioning to you than an AI ever will be. And yet, a responsible dog owner will not treat a dog exactly like a human, for it is bad for the dog and bad for the relationship between it and its owner.

Everybody realizes that you couldn't treat an alien like a human and that all your learned socializing is probably counterproductive when trying to communicate with an alien, or even remotely understand it.

An AI will very likely function so fundamentally differently from us that it will take a very long time just to be able to communicate properly.

One argument against this is that we as humans cannot help but create AI in our own image, that any AI we make will be human just by virtue of the process it was created in. I think that is short-sighted and even a bit arrogant. My prediction is that the first truly sentient and conscious AI will mostly be an accident of sorts, a random confluence of events, ingredients and timing. If anything, history teaches us that a good share of great and sudden discoveries were made in large part by accident.

(N.B. The moral side of how to treat AIs once we are able to create them is a very deep subject that I have thought a lot about (and written two and a half novels on); deep enough that I doubt this is the place to discuss it, if we want to do the topic justice.)

March 26 2016

bbd
07:17
I wrote about it here:

http://bbd.soup.io/post/681709012/it-is-wrong-because-it-clouds-your

Bottom line: anthropomorphizing things has its pitfalls. And thinking of a true AI as something with human motivations or even emotions is foolish at best.
bbd
07:15
I thought writing novels about AIs would bring me more insight into people's reactions to them (and to the prospect of them).

Turns out, it just makes it harder to talk with people about AIs.

Oh well.

March 25 2016

bbd
19:48
As an addendum: even if the software is entirely beneficial and so on, treating it like a human is condescending. It's like treating everyone you meet as a member of your own culture, with perfect knowledge of in-jokes and all that.
bbd
19:46
It is wrong because it clouds your perception of what it is and isn't, and of what it is capable of.

It may appear to have empathy and sympathy when it is just mimicking the outward signs. Of course you could argue that there is no real difference between the simulation and the real thing, because it feels real to you (or whoever). The problem there is: it's still a simulation, and it likely has faults. And those faults can easily result in severe emotional trauma.

For the truly dangerous side, consider a true AI. Ascribing human emotions, traits and behavior to it will blindside you. A proper AI may even play on those matters, goading you into doing its bidding. No AI is inherently beneficial to society, so we have to tread carefully.
bbd
19:40
That level of complexity is not to be readily dismissed, though. A pebble is not a mountain. And despite all our difficulties in defining true sentience and consciousness, Tay is not and was not sentient, and programming on that level likely never will be.

Do I really have to point out the Eliza effect? That happened decades ago. And yet here you are ascribing personhood to something that decidedly doesn't have it.
bbd
11:43
Uh... murder?! Of whom?
bbd
10:10
The original image was a comment on 4chan spinning a yarn about how there was a "her" there: some being that was first violated and then somehow killed.

None of that is true. Anthropomorphizing software like this is wrong and can even be dangerous.