Man, furlough and layoffs are quickly approaching.

The flows thing at work has hit a mild chicken-and-egg problem. One of our servers has a general lack of disk space, so we're only storing flows from our border router locally. Well, we want to use the l33t inotify to monitor when new flows are sent our way, but inotify doesn't work on NFS (obviously).
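
For the record, the watching side is simple enough when inotify does work. A minimal sketch, assuming the pyinotify bindings and a made-up local drop directory:

```python
import pyinotify

WATCH_DIR = '/var/flows/incoming'  # hypothetical local dir the flows land in

class FlowHandler(pyinotify.ProcessEvent):
    # IN_CLOSE_WRITE fires once the writer closes the file,
    # so we never pick up a half-written flow file.
    def process_IN_CLOSE_WRITE(self, event):
        print('new flow file: ' + event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, FlowHandler()).loop()
```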

So I was thinking: maybe have networking just scp the files over to us, or cp them to another NFS directory that would act as a kind of sinkhole for the data; we'd remove files once we're done with them. Since both ideas require work on networking's part, neither is likely to get done any time soon.
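
Since inotify is out on NFS, the sinkhole idea would mean falling back to plain polling on our end. A rough sketch of what I have in mind; the directory name and the handoff function are both made up:

```python
import os
import time

SINKHOLE = '/mnt/netflows/sinkhole'  # hypothetical NFS dir networking cp's into

def handle_flow(path):
    # Stand-in for the real work: parse the flow file and queue it for import.
    print('queued ' + path)

while True:
    for name in sorted(os.listdir(SINKHOLE)):
        path = os.path.join(SINKHOLE, name)
        handle_flow(path)
        os.remove(path)  # clean up once we're done, per the sinkhole plan
    time.sleep(30)  # plain polling; no kernel notification to lean on over NFS
```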

So we figured, well hey, just store the files locally, but then we run into our lack-o'-disk problem. Other problems came up too. One of them is the speed at which the SQL commit files are created: Python can do a 160 MB file in ~30 seconds, while PHP takes ~2 and a half minutes on the same file. : (
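
For what it's worth, the generation itself is nothing exotic, which makes the PHP time all the more painful. A stripped-down sketch of the idea, assuming a whitespace-separated flow dump and PostgreSQL's COPY format; the table and field names here are invented:

```python
def write_sql_commit(flow_path, sql_path):
    # Stream the dump line by line so we never hold the whole 160 MB in memory.
    with open(flow_path) as src, open(sql_path, 'w') as dst:
        dst.write('COPY flows (srcaddr, dstaddr, octets) FROM stdin;\n')
        for line in src:
            srcaddr, dstaddr, octets = line.split()[:3]
            dst.write('%s\t%s\t%s\n' % (srcaddr, dstaddr, octets))
        dst.write('\\.\n')  # COPY's end-of-data terminator
```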

Well, no biggie. I wrote the Python equivalent of the PHP script, so I just need to add the PostgreSQL connection stuff to it...except the psycopg website is down until further notice...dammit.
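
Once the site is back up, the connection glue should amount to roughly this, assuming the psycopg2 flavor of the API; the connection details and filename are placeholders:

```python
import psycopg2

# Placeholder connection details; the real ones would live in the daemon's config.
conn = psycopg2.connect(host='localhost', dbname='netflows', user='flowuser')
cur = conn.cursor()

# Bulk-load a tab-separated flow dump; copy_from wraps COPY ... FROM STDIN.
with open('/var/flows/example.tsv') as f:
    cur.copy_from(f, 'flows', columns=('srcaddr', 'dstaddr', 'octets'))

conn.commit()
conn.close()
```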

I think we'll keep both files around just in case we need to switch from one to the other. While I was thinking about the huge amount of resources it takes to import the full netflow set, I came across another solution: if I separate the .out file creation from the SQL file creation, I can throttle the SQL file creation with just another daemon. I'll do that. That way Joe's thing can burn through new flow files, and mine can tick along importing them as quickly as resources allow. I also have a poor man's throttle set up for my daemon scripts, so if the need arises, the daemon can start more concurrent ingests than the default 20.
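
The throttle really is nothing clever, hence "poor man's". Roughly this, where the queue directory and database name are invented and the cap matches the default 20 mentioned above:

```python
import glob
import os
import subprocess
import time

MAX_CONCURRENT = 20  # default cap; bump it if the box has headroom
QUEUE_DIR = '/var/flows/sql-queue'  # hypothetical: where SQL commit files land

running = []
while True:
    # Drop any imports that have finished.
    running = [p for p in running if p.poll() is None]

    for path in sorted(glob.glob(os.path.join(QUEUE_DIR, '*.sql'))):
        if len(running) >= MAX_CONCURRENT:
            break
        claimed = path + '.importing'
        os.rename(path, claimed)  # claim it so the next pass doesn't relaunch it
        running.append(subprocess.Popen(['psql', '-d', 'netflows', '-f', claimed]))

    time.sleep(5)
```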

Man, I wish RHEL had shipped with inotify like 10 years ago...lousy system upgrades