100 years in IT …

… sounds like a lot. if you look back at just the last 30 years it’s really amazing what has changed in terms of storage capacity and speed. now think about the next 100 years. scientists say we will not be able to keep up with moore’s law (doubling capacity/speed every 18 months), but there’s always something new coming out, new materials, new ideas.

however, what i often see is certificates (private keys, code signing etc) with a 100-year lifetime. no system will survive that long, and if you upgrade you’ll get a new SSL or TLS certificate etc anyway. some old files you’ll keep though, and there’s the danger. you give away old servers, old disks etc, but some numbers like your SSN or your birthdate stay the same for many years. there’s also a chance that you’ll get hacked, or that one of the companies you do business with will be.

many cracking or hacking attempts would be easily prevented IF we used ‘expiration dates’ more thoughtfully. what seems secure today might be a joke for brute-force attackers 5 years from now. just set a realistic expiry date for your certs!
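to make that concrete, here’s a quick python sketch (my own illustration, the hostname is just a placeholder) that pulls a server’s certificate and reports how many days of validity are left – handy for spotting the 100-year monsters:

```python
# quick check: how long is a server certificate still valid?
# (hostname is a placeholder; adjust as needed)
import socket
import ssl
import time

def days_until_expiry(hostname, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()          # includes 'notAfter' as a date string
    expiry = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expiry - time.time()) / 86400

if __name__ == "__main__":
    days = days_until_expiry("example.com")
    print(f"certificate expires in {days:.0f} days")
    if days > 5 * 365:
        print("that's a very long-lived cert – maybe pick a shorter expiry")
```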

batteries are bad!

if you think about how much material and weight goes into a battery, just to store a few electrons … and you always have some chemical reaction with some degradation of the materials involved … kind of crazy.

how about storing ions, eg nitrogen ions, in liquid form in a cryotank? you gotta deal with the temperatures and the pressures, but since the tank would contain only ONE inert chemical there’s no degradation, and you can just liquefy air (and separate it by boiling point) to get more nitrogen. the supply is almost unlimited! besides, liquefying can be done thru a purely mechanical process – compression – which could be powered by wind without going thru conversions etc, just a plain simple mechanical link from the turbine blades to the compressor.

even if you don’t use the ion effect, you can still use the liquid N2 to improve the efficiency of existing processes (by creating a larger delta T while burning conventional fuels), or use it directly for air conditioning etc and then drive another turbine with the exhaust to recover even more energy.
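for a rough sanity check, here’s my own back-of-envelope (textbook constants, ideal-gas math, every real-world loss ignored) for how much expansion work a kilogram of liquid nitrogen could give back at ambient temperature:

```python
# back-of-envelope: ideal isothermal expansion work of N2 warmed from liquid
# density to ambient gas at 300 K (an upper bound; all real losses ignored)
import math

R = 8.314             # J/(mol*K), gas constant
T_AMBIENT = 300.0     # K, assumed ambient temperature
P_AMBIENT = 101325.0  # Pa
M_N2 = 0.028          # kg/mol, molar mass of nitrogen
RHO_LIQUID = 807.0    # kg/m^3, density of liquid N2 at its boiling point

v_liquid = M_N2 / RHO_LIQUID       # m^3/mol as liquid
v_gas = R * T_AMBIENT / P_AMBIENT  # m^3/mol as ideal gas at ambient
w_per_kg = R * T_AMBIENT * math.log(v_gas / v_liquid) / M_N2  # J/kg

print(f"ideal work: ~{w_per_kg/1e3:.0f} kJ/kg (~{w_per_kg/3600:.0f} Wh/kg)")
# roughly 585 kJ/kg, i.e. ~160 Wh/kg before compressors, pumps and cold losses
```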

preventing DDoS attacks

DDoS = Distributed Denial of Service – if you didn’t know that, this article is probably NOT for you!

anyways, usually what happens is that the attacker sends millions of requests to a server (http, dns, sql etc) and overwhelms it to the point that no one gets a response in time, or the server stalls or crashes.

to defend we need to measure the timing: drones usually send requests from a single IP as fast as they can, or at regular intervals. statistics can show which requests are legit. a normal user will send 1, 2 or 3 requests to the same URL, then give up and move on to another site. the first request will tolerate the longest wait since you expect some load time, then the user gets impatient, angrily hits reload a few times before trying other sites. it’s also unlikely that a real user will fire off other GET requests from the same IP within milliseconds, but with a different browser. ALL this data AND its timing need to be recorded, and then, if a source looks legit, its second and third requests get answered. statistics can also help determine whether the attacking systems all run the same software – eg manipulate a few header bits on the server side and watch how the client reacts. a power user might try two or three different browsers to get a site to work, but even that takes time IF done manually.

of course a smart attacker knows these tricks, but he can’t get thru a block protecting the servers! filtering uses far fewer system resources than serving a single request. there is a risk of denying access to a legit but confused user who hits refresh 10 times, but the benefit for the many outweighs the benefit for the few! you could always remember the unfortunate IP and get back to it later, once the server is idle again.
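here’s a minimal python sketch of the timing idea (the thresholds and names are invented for illustration, not tuned values): keep recent timestamps per source IP and only answer a request if the pattern looks like an impatient human rather than a metronome or a flood.

```python
# minimal sketch of per-IP timing heuristics; thresholds are invented examples
import time
from collections import defaultdict, deque

WINDOW = 10.0        # seconds of history kept per source IP
MAX_REQUESTS = 5     # more than this inside the window looks like a flood
MIN_HUMAN_GAP = 0.3  # seconds; repeats faster than this don't look hand-made

history = defaultdict(deque)  # ip -> recent request timestamps

def looks_legit(ip, now=None):
    now = time.monotonic() if now is None else now
    q = history[ip]
    while q and now - q[0] > WINDOW:  # forget timestamps outside the window
        q.popleft()
    q.append(now)
    if len(q) > MAX_REQUESTS:
        return False                  # too many requests, treat as drone
    ts = list(q)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    if any(g < MIN_HUMAN_GAP for g in gaps):
        return False                  # sub-human repeat rate from the same IP
    return True

# usage: call looks_legit(client_ip) before doing any real work; rejected IPs
# can be parked and revisited later when the server is idle again
```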

Where’s my SRAM???

as said before, i want an SRAM module for my PC. nice that you can get triple-channel DDR3 that gives you a max bandwidth of more than 25 gigabytes per sec, but what if you only need a few bytes here and there? LATENCY is the killer. or let’s say you run some virtualization software – with hundreds or thousands of context switches per sec. every time, your cache content becomes kinda obsolete!

i don’t know the actual numbers for how much time gets lost – some cores can work on other tasks while waiting for data, but it ain’t good. we see more SSDs, more RAM, more cores, but what feeds truly random access best? SRAM!
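to see why latency rather than bandwidth is the killer, here’s a tiny pointer-chasing toy i’d sketch in python (sizes are arbitrary, and the interpreter overhead blurs the effect, but the gap still shows): sequential sweeps are prefetch-friendly, dependent random hops have to wait for every single load.

```python
# toy latency demo: sequential access vs dependent random "pointer chasing"
# (array size is arbitrary; absolute numbers depend on the machine)
import random
import time

N = 1 << 22                      # ~4M entries
data = list(range(N))

# build one big random cycle so every access depends on the previous result
order = list(range(N))
random.shuffle(order)
perm = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    perm[a] = b

start = time.perf_counter()
total = 0
for i in range(N):               # sequential, cache/prefetch friendly
    total += data[i]
seq = time.perf_counter() - start

start = time.perf_counter()
idx = 0
for _ in range(N):               # each load depends on the last one -> latency bound
    idx = perm[idx]
chase = time.perf_counter() - start

print(f"sequential: {seq:.2f}s   pointer chase: {chase:.2f}s")
```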

PLEASE PLEASE PLEASE – just a gig or two

of course some newer software techniques like address randomization to fend off hacker attacks (eg heap spraying) limit how much speedup you’d get, but we need to differentiate between kinds of RAM data: you buy a speedy SSD or a 15k disk if you expect lots of random access, and a few TB at 5-7k rpm if you need space. so why not do the same with RAM?

virtualization is another beast – frequent context switches NEED fast RAM for the heaps, registers etc but may not move much data …

no title

seems like wordpress eats up my blog titles. everything is ‘auto draft’ now. somehow true. nothing lasts forever.

EDIT : it was my wordpress theme. switched to another one and now it works fine … so far. let’s see what else broke now!

too many linuxes!

i love linux. i hate linux.

around me there are several laptop and desktop PCs with linux. i’ve tried about 8 different distros so far, and they all had trouble with something: all needed some command line editing, mostly conf files, plus checking or fixing startup services, blacklisting drivers and modules, getting X11 to work, changing grub etc etc etc.

if you didn’t understand the last sentence, don’t worry. it just means you’re in the company of 90% of all users, and you’re not ready to install or use it. stay away!

hurts me to say this, but from a support perspective it’s a nightmare. sooooooo many distros, different packages, different update mechanisms, and above all dependency problems. nothing a ‘noob’ (aka beginner) can sort out. not on his own, not in a few days or weeks.

the problem seems to be: there are too many distros out there. distrowatch-dot-com tracks over a hundred different ones. i understand the need for clusters, large-memory systems, routers, voip boxes and sysadmins doing troubleshooting to have their own distros, but for the normal user that’s just too complicated. look at windows: very likely that if you download 32-bit code you can run it on w2k, xp, vista and w7, all of which came out in the last 12 years. you click on install, and after a while it should work, and even do some updating automatically.

no distro i’ve seen runs smooth and easy, and i’m tired of trying! and don’t say i should compile something. i don’t have the time for it. period.

why E.T. won’t come

one could think of many reasons:

– the nearest stars and planets are light-years away (other galaxies vastly farther), so that alone limits travel opportunities. aliens have lives, families, jobs, maybe limited lifespans, money problems, ballgames they want to see etc, so why would they travel many years just to see us? who wants to spend money on such a lengthy trip?

– they send robots first. once they see our limited technology, our wars, our bacteria and viruses and other stupidity, they just laugh and check back a thousand years later, via robot. good luck, humans, solving your problems till then

– what can they learn? they are so much more advanced than we are, so maybe it’s just entertaining but that’ll last only so long

– there’s gotta be more exciting and picturesque places than earth. think dual / triple sunsets! 20 moons! supernovas!

– if you’ve already got lightspeed and all the nice tech you need for such a trip, do you really need anything from here like raw materials (metals, cells, DNA etc)? why not just cook it up in your own fusion reactor?

– maybe they came from here, so it’s just educational for their archeologists to come back, but why spend more time than necessary or contact us undeveloped humans?

– their planet is dying, their water is gone etc – wouldn’t they build something to fix that instead of spending their resources on travel? besides, our planet is not so clean anymore either

multicore vs bandwidth

IBM (C)(R)(TM)(etc) has a new 17-core chip, 16 to do the work and 1 to ‘rule them all’ – the problem though is that the memory interface has ‘only’ a few channels.

do the math: 16 cores on, say, a 4-channel bus means 4 cores compete for each channel, right? supersized, multilevel, highly associative caches help, but with multi-gigabyte datasets that only goes so far, and there are lots of problems where CUDA etc doesn’t do so well.
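to put rough numbers on it (the per-channel figure is my assumption, roughly one DDR3-1600 channel):

```python
# rough per-core bandwidth math; the channel bandwidth is an assumed example
CORES = 16
CHANNELS = 4
GB_PER_CHANNEL = 12.8  # GB/s, about one DDR3-1600 channel (assumption)

total_bw = CHANNELS * GB_PER_CHANNEL
per_core = total_bw / CORES

print(f"{CORES // CHANNELS} cores share each channel")
print(f"total {total_bw:.1f} GB/s -> only {per_core:.1f} GB/s per core if everyone streams")
```

that’s around 3 GB/s per core, and it only gets worse once the cores fight over the same rows and banks.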

a better solution would be to have lots of small cores, each with built-in RAM, and even if some spend half their life shuffling data (usually considered bad karma) you should still end up with amazing overall bandwidth and latency (within the chip). across cores you need a fast multipath router, and you may lose one or two cores at a time to transfers, but if you’ve got, let’s say, 64 cores that doesn’t really matter. it may also make memory use more efficient. think 64 bit: so far only a few apps need more than 4 GB of memory. give a gig to every core, and locally you only have to use 32-bit pointers. across the whole dataset you of course need more.

many problems can be split up into many smaller ones: video/picture compression and analysis – it’s all tiles, blocks and frames anyway, so each core can handle its own data and then transfer the result to the ‘mastermind’. games and simulations would benefit as well – it’s frames and particles. render a few speculative frames ahead based on the probability of a player moving, eg forward towards an object (door, enemy, food etc), and discard them if not needed.
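a toy sketch of that tiles idea (frame size, tile size and the per-tile ‘work’ are all placeholders): split a frame into tiles, let a pool of workers each crunch their own tile, and hand the small per-tile results back to the ‘mastermind’.

```python
# toy sketch: split a frame into tiles and process them in parallel
# (geometry and the per-tile "analysis" are placeholders)
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 640, 480, 80

def make_frame():
    # fake grayscale frame as a flat list of pixel values
    return [(x * y) % 256 for y in range(HEIGHT) for x in range(WIDTH)]

def extract_tile(frame, tx, ty):
    # copy out one tile so each worker only touches its own data
    return [frame[y * WIDTH + x]
            for y in range(ty, ty + TILE)
            for x in range(tx, tx + TILE)]

def process_tile(tile):
    # stand-in "analysis": average brightness of the tile
    return sum(tile) / len(tile)

if __name__ == "__main__":
    frame = make_frame()
    tiles = [extract_tile(frame, tx, ty)
             for ty in range(0, HEIGHT, TILE)
             for tx in range(0, WIDTH, TILE)]
    with Pool() as pool:             # one worker per core, each owns its tile
        results = pool.map(process_tile, tiles)
    # the 'mastermind' only collects the small per-tile results
    print(f"{len(results)} tiles processed, first result: {results[0]:.1f}")
```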

my dear SRAM …

… i want you back. really. i’ve lived too long with my DRAM – very fast too, but it’s the latency that makes me wait a lot longer than necessary. even if you want to minimize the trace lines on the PCB and stick with standard chipset timings/layouts – fine – keep the CAS/RAS access scheme, but get rid of the latencies and precharges and all that!

with structures of .25 microns now you can fit a lot of SRAM cells on a die and still hit a decent price point. think of flash – you’ll get 8GB for $30 on sale. an SRAM cell needs roughly 6 times the area of a flash cell (even more compared to multilevel flash), but still, you could get a GB of SRAM for about the same price. and that’ll fit all the programs you usually need (kernels, drivers) and some of the user data as well.
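the back-of-envelope behind that claim (using the ~6x cell-area ratio from above; real ratios vary by process and cell design):

```python
# back-of-envelope for "a GB of SRAM for the same price"
# (the ~6x cell-area ratio is the figure used in the post; real values vary)
FLASH_GB = 8        # GB of flash for the reference price
FLASH_PRICE = 30    # $ on sale
AREA_RATIO = 6      # SRAM cell area vs. flash cell area, rough assumption

sram_gb = FLASH_GB / AREA_RATIO
print(f"same die area, roughly the same ${FLASH_PRICE}: about {sram_gb:.1f} GB of SRAM")
# ~1.3 GB, i.e. right in the "just a gig or two" range
```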

i’d rather have some SRAM in my system than SSD – the loading times don’t bug me as much as wasted performance while i’m running an app! and even 12MB cache in the latest CPU isn’t doing it for me. i want my SRAM back!!!

IP cams

i’ve been playing a lot with panasonic bl-c1 cameras; they get you about 4 fps at 640×480 with best quality (‘precision’ setting). there are 11 of them on a half-duplex 100Mb network, and i use the 4 spare wire strands for power. about $100 apiece. the server (P4-1800-400-512) can do about 1 fps per 5% cpu, fast enough for the purpose, but needs 80MB of RAM per cam to buffer!

what i’m missing though is a setting to pull BMP etc off the cams – it doesn’t make sense on the LAN to compress the pics, transfer them to the server, and then uncompress them again to analyse for motion. bandwidth on the LAN is plenty, and cheap, but compression takes time and power (= heat and $) … maybe some firmware change?
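a quick back-of-envelope for what raw frames would cost on the wire (assuming 24-bit color and ignoring protocol overhead; these are my numbers, not from the camera specs):

```python
# back-of-envelope: LAN bandwidth for uncompressed 640x480 frames at 4 fps
# (assumes 24-bit color and ignores protocol overhead)
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3   # 24-bit color, assumption
FPS = 4
CAMERAS = 11

frame_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 1e6
per_cam_mbit = frame_mb * FPS * 8
total_mbit = per_cam_mbit * CAMERAS

print(f"one raw frame: {frame_mb:.2f} MB")
print(f"per camera: {per_cam_mbit:.1f} Mbit/s")
print(f"all {CAMERAS} cams at the server: {total_mbit:.0f} Mbit/s")
```

so a dedicated 100Mb run per camera would handle raw frames fine; the server’s uplink is the only place where the aggregate starts to add up.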