Big changes coming soon. I've successfully migrated the BBS to a VM platform - which will make it easier to back up and restore, and the new machine is faster. It is up and running right now - but I'm still working on the fine details to make the transition as seamless as possible. I'll probably have to rebuild it, because this was kind of the test run while I figured out how to make it do what I wanted. Now that I've figured it out, I can make sure the final build is configured exactly like this one.
So, not that you'll notice, but there might be some small downtime - although it might be as simple as having both machines online, making sure no one is on, and updating the DNS records so name resolution points to the new machine instead of this one.
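For reference, the cutover check is roughly this (the hostname here is made up, not the BBS's real one):

```shell
# Before the switch, lower the DNS record's TTL so the change
# propagates quickly; after repointing the A record at the new
# machine, confirm what the world actually sees:
dig +short bbs.example.com    # should now return the NEW machine's IP
```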
Super cool data-center nearly pro-grade stuff I've got going on here.
The thing comes up way faster for initial login on the new machine. I'm not sure if that has more to do with the faster core (though it only has one core, where this one has all of 'em)... or because the VM is only bound to one NIC. I'm leaning toward the latter - there is a lag on this one that seems like it is cycling through things that time out and fail before it finally resolves correctly. That isn't there on the VM. Hard to describe in layman's terms - but basically the VM has one NIC, it is pretty directly exposed, and it is the ONLY way in. The production server on bare metal has two doors, a front one and a back one - and I think it has to check both doors every time it hears a knock. The VM knows that any time it hears a knock, there is only one way in.
Yeah... I'm pretty happy with myself for figuring out how to get Proxmox running. So much so that I'm almost considering just migrating over and taking the loss of the few messages or file uploads that have happened since I did it. Things aren't quite perfect though - I'm using XFCE instead of Gnome because I thought Gnome had some problems with VNC - but it works fine under Proxmox... and I didn't set up all the same user accounts on the test box as on the production box - so I'd have to fix that.
I could just do it manually - but rebuilding a final VM from scratch is probably the easier way to go. I'm just eager to get things up and running now - but there really isn't any reason to rush.
Especially considering the volume of traffic here.
I've got a roadtrip this weekend - so nothing is going to happen until next week.
But I've definitely got my bases covered on recovery if something goes wrong on this server. We would lose a few days messages at most and recovery time would be very quick and painless. That is awesome - because for the last few years, this site ran without much of a safety net.
I keep forgetting I have a game plan. I'm going to do a P2V using Clonezilla to basically make this machine an exact VM clone. This will require BBS downtime - but it is going to be easy, and what I bring back up will be *exactly* this BBS at the time I took it down.
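The rough shape of that workflow looks like this (the VM ID, disk size, and storage name below are illustrative, not my actual values):

```shell
# 1. Create an empty target VM with a disk at least as large as the
#    physical machine's drive:
qm create 100 --name bbs --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:250

# 2. Boot BOTH the physical box and the new VM from the Clonezilla
#    live ISO.
# 3. On the source, run Clonezilla's remote device cloning
#    (disk_to_remote_disk); on the VM, receive the clone over the LAN.
# 4. Detach the ISO and boot the VM from the freshly cloned disk.
```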
It is a bummer that Proxmox doesn't have the kind of P2V utilities built into it that commercial hypervisors have - but that is just reflective of this being a Linux product and the Linux philosophy. There are already ways in Linux to do this - so why reinvent the wheel and bloat the program when not everyone will want or need that "feature"? It isn't invalid reasoning - but it is often less convenient.
I'm more reactive than proactive, so I drop by, look around and leave. Sorry!
Extended downtime today, cloning from the production to the target. Looks like the production server uses UEFI instead of a legacy BIOS... and Proxmox by default uses a legacy BIOS.
So, I've got to figure that out.
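If it works the way I expect, the fix is switching the target VM's firmware from SeaBIOS to OVMF (Proxmox's UEFI implementation). Something like this, with a made-up VM ID and storage name:

```shell
# Switch the VM to UEFI firmware so the cloned UEFI disk will boot:
qm set 100 --bios ovmf
# OVMF needs a small dedicated disk to hold the EFI variables:
qm set 100 --efidisk0 local-lvm:1
```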
Yeah, that is kinda funny. I've been exhausting myself every day so I don't have the energy to hop on here at 2am. I put a screen up in the carport, dug some flower beds, tilled them, built a greenhouse, weedwhacked the pasture so we can eventually mow it (too wild still), and a dozen other things. It's been a big month!
Right now I'm watching my eight-year-old do four-digit subtraction. She's pretty good. More impressive is my eleven-year-old doing algebra better than her 14- and 12-year-old siblings.
Meanwhile I'm supposed to get my new motherboard tomorrow. Can't wait to mount it on a nice piece of oak and plug in the CPU, RAM, and power supply.
Next month supposedly I'll have a career again. Then I'll have breaks in the daytime and a computer that isn't my phone for checking the BBS. W00T.
Whoops, I neglected to hit Reply. What I was calling funny is that the Pi version had better traffic. That should also contextualize the rest of my post neatly.
It does. I've been having trouble getting on, too. And I'm struggling with the motivation to rebuild the Debian install and port the BBS over to the VM. I should do it, I really should... there are a bunch of things I should do...
The navigation said 10 AM to 3:30 PM from Vegas to Phoenix.
We did it in 4:43 - with a stop for a newspaper and another for gas.
Plus, all the other Vegas stuff, then work on Monday. Plus waking up at 5:30 AM to drive my wife to the airport.
I'm kinda all tuckered out.
Thu Apr 22 2021 21:04:27 MST from Wangiss <firstname.lastname@example.org>
Whoops, I neglected to hit Reply. What I was calling funny is that the Pi version had better traffic. That should also contextualize the rest of my post neatly.
It has been 3 days since that last post. What has changed? Got the PC mounted to a piece of oak, yet?
We will have lost some messages from today. I migrated the machine. We are now running in a VM on a hypervisor. This creates several advantages for me, and for you as end users.
Otherwise, the change should be transparent, I believe - and may even have some performance benefits. I've thrown 2 processors at it for now. It may run faster. If it doesn't, let me know, and I'll see what I can tweak.
I'm super stoked about getting this migrated over to the VM. I dislike that we lost some messages - that is bad for traffic and conversation - but it'll be a piece of cake to back up and restore the BBS now, to move it to other hardware if there is a hardware failure, and to expand it as necessary. I'll probably explore some of the clustering and high availability possibilities that open up if I put up another Proxmox node and shared storage. I've got built-in resource and performance metrics letting me see in real time how the VM is doing... I set it up as a 1-CPU system initially, and just added a 2nd CPU. I also have 8GB of memory, but I'm pretty sure it is fairly trivial to bump it up to 16GB. These NUCs just use laptop-style memory.
I'll have to reboot the VM to get it to use the 2nd CPU, and will probably do that at some point tonight. Evidently you can't add resources and have them recognized "on the fly" in Proxmox. I feel like things are running a little choppy in some places with only the 1 CPU allocated.
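The change itself is a one-liner plus a restart (VM ID made up; CPU hot-plug exists but needs extra guest setup, so a clean reboot is the simple path):

```shell
# Bump the vCPU count, then restart so the guest actually sees it:
qm set 100 --cores 2
qm shutdown 100 && qm start 100
```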
There isn't a direct path to the BBS for me from the internal network. I can open up a local console and then connect locally to the Citadel - but the way Proxmox works, I have the management console dedicated and published only on my internal network, not accessible from outside, and the VM/BBS dedicated and published only on the public network, not accessible internally. Hard to explain - but it seems to be a limitation of the way Proxmox handles networking - and it took me a while to figure out. This is different from how other hypervisors handle it.
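The setup I'm describing boils down to two Linux bridges, each tied to its own physical NIC. A sketch of what that looks like in /etc/network/interfaces (interface names and addresses are made up; the public addresses are documentation-range placeholders):

```shell
# /etc/network/interfaces (sketch)
auto eno1
iface eno1 inet manual

auto vmbr0                       # public bridge: the VM's only "door"
iface vmbr0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1                       # internal bridge: management UI only
iface vmbr1 inet static
    address 192.168.1.5/24
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```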
There is a good Proxmox vs. ESXi review here:
That basically confirms most of my experiences with Proxmox - where it is superior and where it still lags behind. Mostly network configuration. For being a "Networking OS" - Linux as a whole takes some fairly bone-headed approaches to Networking configuration. If there is a hard way to do networking, Linux goes, "hold my beer," and makes it even more convoluted.
But - it has its rewards if you're willing to fight with that.
So, I ate up a 1TB SSD pretty quickly with Proxmox, and I'm at very high utilization on the disk capacity already.
I've bought a cheaper NAS to set up as an NFS or iSCSI device for Proxmox - and I plan to build a second node using one of the other NUCs I have. We'll have pretty good coverage then - and that will allow me to back up and create clone guests on the secondary Proxmox node - and that way if I fuck things up like I did last night, recovery will be quick. I was fortunate to land on my feet this time, and having resources tight made it riskier than it needed to be. I'll have to get another 1TB SSD to cover the second node - I could swing that now but I'd rather wait until my next paycheck. I'm also about to drop about $2k on a new paint job on my Z3 this weekend - so... I'm trying to be a bit frugal.
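Registering the NAS with Proxmox should be quick once it's up - something like this, where the storage name, IP, and export path are all placeholders for whatever the NAS actually serves:

```shell
# Add the NAS as NFS-backed storage for backups:
pvesm add nfs nas-backup \
  --server 192.168.1.50 \
  --export /volume1/proxmox \
  --content backup
pvesm status    # confirm the new storage shows up and is online
```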
Anyhow - yeah. Things are running well - but I'm not at the finish line for this quite yet.
I may also not fully understand the logical volume layout of Proxmox. It looks like I've actually got the disk images for the VMs in a virtual pool of about 900GB, and about 400GB of that is used even though the three systems I have VMs for have 750GB of total drive space. The disks are thin-provisioned (dynamic), I think. Which would make sense. This is part of the just-in-time resource pooling strategy that enterprise hypervisors employ so that cloud providers can promise you "more CPUs, more memory, and more drive storage" at any time.
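The arithmetic works out because thin-provisioned disks only consume pool space as blocks are actually written - the sum of the *virtual* disk sizes can even exceed the pool. Using the numbers above:

```shell
# Thin pool arithmetic: provisioned vs. physically used space.
pool_gb=900          # thin pool capacity reported by Proxmox
provisioned_gb=750   # sum of the three VMs' virtual disk sizes
used_gb=400          # blocks actually written so far

awk -v p="$pool_gb" -v v="$provisioned_gb" -v u="$used_gb" 'BEGIN {
  printf "overcommit ratio: %.2f\n", v / p   # >1.0 is possible with thin disks
  printf "pool utilization: %.0f%%\n", u / p * 100
}'
# overcommit ratio: 0.83
# pool utilization: 44%
```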
What you are buying with a hosted server is a fractional timeshare of a much larger computer - and you get a virtualized slice of it.
The whole computer is there, but it is shared - and if you only need a bedroom and the kitchen for a few hours a day, that is all you get of it.
So, today I brought up a NAS as a backup store for the Citadel. I probably could have done it cheaper - but I still have ambitious plans to create a test Proxmox environment and ship VMs between the production and the test device for testing and experiments.
At some point, I guess I'll build out the test environment as more or less a mirror of the production environment, move a copy of this VM over there, get it running, and start playing around with some of the outstanding broken things that bug me and that I would like to fix. #1 on that list is getting the redirect to the "Hello" room working before user login and after user logout. I had that working - it broke - I tried to fix it, and it broke things REALLY badly, and that was a nightmare to fix.
But with backups and snapshots - especially in a test environment - I'll be able to figure out how to fix this without risking an extended downtime for the actual BBS - which I am terrified of. The BBS can't handle losing the few remaining semi-regular callers we get.
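The snapshot-before-experiment routine is what makes that safe. Roughly (VM ID and snapshot name are made up):

```shell
# Take a point-in-time snapshot before the risky change:
qm snapshot 100 pre-hello-fix --description "before login redirect change"
# ...experiment on the test VM; if it blows up badly:
qm rollback 100 pre-hello-fix
# Belt and suspenders: a full backup to the NAS-backed storage too:
vzdump 100 --storage nas-backup --mode snapshot
```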
All told, I'm in about $500 when I originally had it running fine on a $75 Rpi, not including recurring costs like the ISP and the domain name registration. But honestly - when you think of how much the PCs we used to dedicate to Citadel were back in the day, and do the adjustment for inflation - even THAT cost is CHEAP - especially for a BBS that anyone in the world can access, that multiple users can access at any time... technology is incredible. I think the original Sanitarium was about $2500 in equipment in 1987 dollars - and was a single line BBS that one user at a time could access at 2400bps. :D
But beyond that - I'm doing the tech part of it for the challenge and learning and just the satisfaction of *doing it*. It does make me realize that it is ridiculous that I am an out of work IT professional - partly because the profession is afraid of me because of some of my public positions on issues (and white guys who are conservative *are* blacklisted professionally - it IS happening)... and part of it is because I'm not interested in dealing with their bullshit - the social bullshit, the expectations bullshit - all of the bullshit.
But after achieving this - it is clear that not having me in the IT workforce hurts the industry more than not being in the IT industry hurts me. I'm *really* good at this shit in an industry full of people who are mediocre at it.
Because of Proxmox, I can see that we're a little resource-starved with multiple users logged in and only 4GB of memory allocated to the VM. We never hit the ceiling that I've seen, but we get up around 3.5GB of the 4GB allocated. That is a pretty slim margin.
I'd like to buy 16GB and slap it in there and allocate 8 to 10 GB of that to TSBBS. I think that would make things speed up. It is only $90. But, I just bought the NAS - and dropped $1800 to repaint a 25 year old BMW - not to mention dropped a LOT more than that on a 2020 BMW...
So I'm *kinda* tapped out right now. I'm also carrying a much bigger balance on my cards than I like because my wife paid off the WRONG card last billing cycle.
So... it may be a while before I can bring myself to do this upgrade - but I probably should, once I get to a point where I can.
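When I do get to it, the allocation side is trivial - it's the physical RAM that costs money. Something like (VM ID made up):

```shell
# Raise the VM's memory ceiling once the host has the RAM:
qm set 100 --memory 8192     # value is in MiB; 8 GB ceiling
qm set 100 --balloon 4096    # optional: let it shrink toward 4 GB when idle
```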
A little bouncing tonight. I see a definite improvement in the test environment bumping the VM up to 4 cores and 6GB of memory. I bumped production up to 2 cores and 5 GB of memory - and as expected I feel like there is a little less improvement than if I gave it 4 cores and 6. Once I get a memory upgrade on the prod server we'll go to 8GB of memory... and I think 3 CPU cores - and I think that will create a noticeable improvement in performance.
There is something wrong with production where it is running multiple instances of WebCit, constantly trying to relaunch, finding a port conflict (with itself), and shutting down. That is probably eating up some cycles and hurting performance. I'll be continuing to troubleshoot that. Oddly, it is not happening in my test environment, which is built off a copy of the production environment. Not sure what the deal is there.
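The first things I'd check are generic process and port diagnostics (nothing Citadel-specific assumed here, and the unit name is a guess):

```shell
# How many webcit processes are actually running?
pgrep -a webcit
# Which ports does each one hold? The conflict should show up here:
ss -tlnp | grep webcit
# Is something (an init script AND a systemd unit?) respawning it?
systemctl status webcit 2>/dev/null
```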
The main advantages of hosting a VM are management ones. It becomes far easier to back up and take a point-in-time snapshot. The VM just turns the entire computer into one big file - so, for example, you can have a production server and a test server - and if there is a change, you can try it on the test server first, and if it blows up, it is easy to restore a copy of production to recreate test and try again. It makes catastrophic things way less catastrophic, in general.
You can also add shared storage between two different physical VM nodes, each running a copy of the guest VM - and the VM hypervisor will manage which is attached to the shared storage, and if the production server goes down, it will fail over to the target server - transparently to end users. This is a high level enterprise feature where you've got to have 99.999% uptime... but it is an option once you virtualize.
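In Proxmox terms, once a second node and shared storage exist (and the cluster has quorum), enrolling a guest in HA is about this simple - VM ID made up:

```shell
# Mark the VM as an HA-managed resource that should always be running:
ha-manager add vm:100 --state started
ha-manager status    # see which node currently owns it
```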
There are security advantages too. If someone compromises your VM - they still haven't compromised the HOST machine, and that is difficult, and you can sandbox the guest machine and the VM bare metal from your other networks - plus Proxmox has a built in firewall. The VM management console generally has some performance and resource metrics too - you end up with a kind of management console that wraps around the entire machine your Citadel runs on.
You also allocate it portions of the bare metal's resources. If you're on bare metal, you can't really limit the amount of CPU, RAM, or hard drive that is allocated to your Citadel - but you can with a VM. If it needs more, you can assign it more; if it needs less, give it less and run something else on another VM. Right now we're running on 2 cores of 4, 5GB of 8, and 250GB of 1TB. The remaining resources on the bare metal are available to other things - like test environments where I run the latest versions of Citadel against my production database and decide if I'm ready to go live with those.
Tue May 04 2021 00:29:20 MST from ASCII Express

What advantage do you have running Citadel in a VM? I've never played with that. I just have it running on a VPS, but could easily port it anywhere. Of course back in the day we ran them on our regular desktop computer, and if someone called your BBS then too bad, you couldn't use your computer until they logged off.
So I wonder if someone could set this up...
As an embedded page in a "bulletin" type room in Citadel...
Running a version of Asgard-86 - set up with classic BBS door games like Tradewars...
And in that manner, you could go to a room on the Citadel, and open a traditional CLASSIC Citadel - and then launch a door in that to play door games.
It would be a kind of end-run on this version of Citadel not having doors. But I'm not sure where I'd start.
Castle Adventure game at DOSGames.com
castle.zip - 43k - Run CASTLE.EXE to play
This is a very old "graphic" adventure game which uses textmode / ASCII characters. Despite its simplicity, there is something charming about its gameplay, and it is nostalgically remembered by gamers who played it in the DOS era. Many adventure games back in 1985 were still text-based, so having any kind of visual representation would no doubt be welcomed. Anyway, the game itself is rather average, but it does include action and adventure elements, although there is very little plot. It plays sort of like a Kroz-type ASCII game mixed with a text adventure. A modern Windows remake is available, by Chris Benshoof, and is likely a better way to play this game these days!