[Search for users] [Overall Top Noters] [List of all Conferences] [Download this site]
To search by the username of a noter, use the following form. You can also use the links in the header to list all the conferences that have been indexed.

Enter all or part of a username:

Check to search on personal names (vs. user names)
Conference count: 2,036
Total note count: 9,564,979
Total topic count: 1,373,231
Total body count: 2,584,304
Unique Authors: 294,040
Unique Pnames: 184,941

What is this site?

If you're not an ex-DECie (an employee of DEC, Digital Equipment Corporation, at one point the world leader in the minicomputer market and second only to IBM among computer companies overall), you probably don't care what this site is.

This site originated when I first heard of a new website on the WWW that was indexing all the USENET newsgroups (which existed long before the WWW, since 1980 to be exact, and originally used UUCP to distribute the articles posted to the various newsgroups). All of a sudden you could see where any given user (poster) had posted across all the known public newsgroups.

Well, at DEC we had something similar to newsgroups that we called NOTESfiles (aka "Conferences", aka DECnotes, and at one time also called VAX Notes because it originated on VAX/VMS only). I should also mention that DEC had the largest internal computer network in the world, named the EasyNet, which was worldwide (since DEC was) and used a proprietary networking stack called DECnet.

A NOTESfile was called a "file" because it literally was a file containing all the "posts", with each post having an author, a subject, a creation timestamp, a "body" (lines of text), and also a "personal name" (aka pname), a user-settable string that might be the user's real name but was often anything at all. Unlike USENET newsgroups, whose posts (articles) are distributed and copied across a vast network, a post to a NOTESfile is not distributed across the network. Instead, DEC being a leader in client/server computing (thanks to its long history in minicomputers), anyone interested in a given NOTESfile (which was usually created for a specific purpose like "heavy metal", "singles", or a given product like DECnet-ULTRIX) would use a NOTES client to read the posts. The client was originally only on VMS but expanded to ULTRIX and PATHworks for PCs; I'm not sure if RSTS or other operating systems got a client. The posts in a NOTESfile were structured into a set of numbered "topics", each with zero or more numbered "replies". Each post also had a unique identifier (UID) within the given NOTESfile. So...

I had the idea that I'd do something similar to the site that was indexing USENET newsgroups, but for our various (over 2,000) NOTESfiles. A co-worker of mine, Matt Thomas, had already written a callable NOTES client library and also an X Windows client for ULTRIX (DEC's first UNIX-based OS), and I used his library to write a terminal-based client (using the curses library) for ULTRIX. (For the younger ones reading this: even in the 1980s and early 1990s a lot of folks did not have a graphical workstation or PC; instead we often had a plain old video terminal, like a VT100, that displayed characters in the common 24-line by 80-column format.)

Using the NOTES library, I started with a C program. Working from the list of published locations (NODE::notesfilename) of NOTESfiles, it downloaded a list of all the posts (minus the bodies/contents) in each one, saving the note title text, UID, timestamp, count of lines in the body, as well as the author** and the personal name**. (** = I actually saved each author and personal name only once, the first time it was seen, assigning unique author and personal-name numbers to each and storing those.) Subsequently the program would only ask the server for information on new posts since we last checked ("seen maps" would be saved by the client and given back to the server to keep track of which posts had already been seen). After all the NOTESfiles had been checked for new posts, another program would index all the data on all the posts so that it could later map from a unique author (username) to their posts, map a username to all the personal names used by that user, map a personal name to all the users who used it, and compute the top noters (by number of posts) in each NOTESfile as well as across all the indexed NOTESfiles.

Originally I then had another program or two (still in C) that would generate output in plain text, which could be posted to a NOTESfile. A while later I wrote a custom HTTP (web) server specifically designed to handle only one client request at a time, because I ran it on my personal workstation, which of course I needed to use for work too. At some point I decided I'd start caching (saving) the bodies of the posts for certain NOTESfiles and have my server serve those up too if requested. This was the 1990s, and I think I only had an RZ23 (a hard disk drive; I see one for sale online listed at only 104 M(ega)B(ytes)!), so how much I could save was limited.


How was my original internal website received? Most folks liked it, some didn't care, but I did hear some employees may have been reprimanded by their bosses because it became clear some spent their entire day reading and posting in NOTESfiles. With my tool/website you could literally see what time these folks got to work, when they left for and got back from lunch, and when they left for the day. That also had a few employees trying to hide their trail by going back and deleting lots of their posts... which I only knew about because they would end up posting to ask why their now-deleted posts were still showing up in my tool/website. Those who have read everything up to here likely already know the answer: because it would be time-wise prohibitive to do a full download of each NOTESfile every night, only information on "new" posts is downloaded, so deletions are never noticed.

20 Years Later

It was 1997, a month before my 10-year anniversary with DEC, when the group I was in was sold off to a startup and we were told to either accept the startup's offer or be considered to have resigned on our own. That took me off the EasyNet, so my tool/website and all the data were eventually burned to CD-ROM and, while not forgotten, collected dust. Then in 2018 I found out there was a Facebook group for alumni of DEC, so I dusted off the data and my set of programs, and with just a couple of minor tweaks all my C utility programs compiled and accessed the data files as if it were 20 years ago, except now on Linux. The HTTP server I had, however, was no good for 2018, so using LAMP (Linux, Apache, MySQL, Perl or PHP) I learned PHP (the other websites I've written over the years were all Perl) and had the tool/site up and running again... but on a server I couldn't put additional load on, and the one bite I got from another ex-DECie offering to host it fell through when they backed out. Only much more recently did I find a free website hosting company that allows both more than 1 GB of disk space (the site needs ~3 GB) and some larger file sizes (my largest file is 51 MB).