How I Passed the US History II CLEP

So, I’ll just open with an explanation: I waited until nearly the end of my undergraduate studies to fulfill my lower-level general education requirements. As a result, I was taking Literature and Music Appreciation while wrapping up Probability and Statistics and my Senior Project, and it certainly took time away from those core major classes. But it was my own fault. I could have knocked those lower-level gen-ed requirements out sooner if I hadn’t put them off.

In the end, however, I was left with my Interpreting the Past requirement as the last item standing in my entire catalog of studies. I didn’t want to take a history course if I didn’t have to, as that would mean dedicating another two to three months to a class. So, I took a co-worker’s advice and looked into CLEP. After verifying with my advisor and the person overseeing credit transfers at my university, I decided on US History II, expecting that having lived through some of what it covers (the 1980s onward) might help me out.

For the sake of establishing proper context, I must clarify that I have not studied any American History since my pre-2003 high school years, beyond what is gained from watching mainstream movies and television series.

I originally planned to take the test around March and purchased the official CLEP study companion, 2020 version, around January or February. After COVID-19 happened, all of the CLEP testing facilities in my area were closed until further notice, which left me unsure of when I would actually be able to test. Focusing on other priorities, I let studying for the CLEP fall by the wayside. Then, testing facilities reopened unexpectedly in June and I found myself totally unprepared. I immediately started searching online for ways to study quickly.

My main efforts led to Modern States, a site that offers free study materials (courses) for preparing to take AP and CLEP exams. I was mainly looking for a good source of study material, but Modern States also provides a voucher that covers the test fee at https://clep.collegeboard.org/ if you complete their course and send them a screenshot of your course progress showing you passed their practice exams. This alone saved me $89 when I went to take the test, so I highly recommend doing it even if you plan to use other sources for most of your study and preparation. They also provide instructions for submitting proof of your payment at the testing location and completion of your exam, after which they will mail you a check to reimburse that cost as well. Utilizing both essentially makes taking the exam free.

Modern States also offers links to other resources to help you study, which can be found under the Resources tab at the top of the US History II course section. I will say that purchasing the official CLEP study companion compiled by College Board is a toss-up, given that many of the questions in the companion’s practice test were also provided in the Modern States tests. At least this was the case for US History II.

I must admit that I don’t believe Modern States alone will get you where you need to be, and I certainly don’t believe I would have passed the US History II CLEP if I’d relied on it exclusively. I watched all of the A Biography of America videos and regularly visited USHistory.org, both of which are included in the Resources section of Modern States. I took all of the Khan Academy modules and section tests related to the period covered by the US History II CLEP (1865 forward, starting with Reconstruction), though I didn’t actually read or watch videos on Khan Academy for study. I also searched online and came across several good practice-test sites like USHistoryQuiz.com, HistoryTeacher.net and a faculty member’s website hosted at a southern California independent polytechnic school. I came back to these quizzes and practice tests regularly over several weeks, and I made sure to read any summary provided with each question after answering it – whether I got it correct or not. Another good place to look, which likely includes some of the places I’ve already mentioned, is the Free study resources section on this page at Free-CLEP-Prep.com. I’m pretty sure this is how I originally found the HistoryTeacher.net website.

In the end, the test was a bit harder than I was expecting going into it, but I managed to pass with a 65, which I presume puts me right in the middle of the passing range between 50 and 80 (scores range from 20 to 80, and 50 was needed for credit at my university). In reality, on questions I was unsure of, I used what I knew from studying to make educated guesses by ruling out answers that didn’t seem to apply. Whether I got those particular questions correct, I can’t say, given that a simple score is all you get, without knowing which questions were missed. Still, the only thing I actually purchased to prepare was the 2020 CLEP study companion, which just had a 120-question practice test. I took that practice test once, two days before my actual exam. I would say, without question, that Modern States and the other online resources I used to study and take practice tests were the most beneficial part of my preparation.

If you have the time and patience, reading the textbook provided on Modern States is definitely helpful, and it offers far more specifics than you will get by just watching the professor’s short lecture videos.

With that, best of luck!

eCryptFS – Accessing Encrypted Drive from LiveUSB Linux with Known User Password

Thanks to another user in the same predicament at LinuxMint.com’s community forums (credit given below), I’ve discovered an easy method to access encrypted drives/partitions using a Linux Mint LiveUSB when the system itself cannot be booted to access the drive for data recovery. This method assumes that the ecryptfs-utils package was used to encrypt the drive, and that the wrapped-passphrase was stored on the drive.

In the past, recovering files from drives or partitions encrypted with eCryptFS required you to note a lengthy passphrase – or, at least, that passphrase was displayed upon installation of Mint, Ubuntu and other distros after choosing to encrypt the home directory.

However, simply knowing the user’s login passphrase is all that is needed for newer encrypted setups, as eCryptFS now automatically stores the wrapped-passphrase on the drive where the data is encrypted, allowing recovery using just the user’s login credentials. Below are some simple, straightforward steps for accessing an encrypted drive from a LiveUSB boot under these conditions:

  1. Simply mount the partition/drive from inside the graphical file manager. This was Nemo in my case, using Linux Mint.
  2. Open a terminal and enter the following command:
    ecryptfs-recover-private .ecryptfs/<USERNAME>/.Private/
  3. If it finds the location provided, enter Y (or simply press Enter, if it is the default option) when presented with Try to recover this directory? [Y/n].
  4. If you’re fortunate, it will find the wrapped-passphrase and then ask Do you know your LOGIN passphrase? [Y/n]. As long as you do (and there’s no reason you shouldn’t if you’re trying to recover your own data), then simply hit Enter or submit Y to reach a prompt to enter the login password for the user of the encrypted home directory.
  5. If all goes well (correct password, included), you’ll be met with INFO: Success! Private data mounted at [/tmp/ecryptfs.dIWKskOD].
  6. The last thing you need to note is where it has mounted the encrypted data, as it won’t be in the /media/ directory where your drive/partition is initially mounted using Nemo. For me, it was placed inside of the /tmp/ directory somewhere like /tmp/ecryptfs.dIWKskOD/. It doesn’t hurt anything to keep the terminal window open in case you need to reference it again, though I imagine it will be the only directory starting with ecryptfs. in its name.
  7. Simply navigate to the provided location and you’ll find the files from the drive/partition unencrypted to access and/or copy to a backup location.
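The steps above condense to a short terminal session. This is a sketch, not a transcript: the /media/ mount path and drive label are assumptions (substitute your own mount point and username), and the /tmp/ directory name is randomly generated, so yours will differ.

```shell
# Mount the target partition first (done via Nemo in the steps above),
# then point the recovery tool at the .ecryptfs directory on that mount.
cd /media/mint/<DRIVE-LABEL>
sudo ecryptfs-recover-private .ecryptfs/<USERNAME>/.Private/
# Answer Y to "Try to recover this directory?" and to
# "Do you know your LOGIN passphrase?", then enter that user's login password.
# On success, the decrypted files appear under a path like:
#   /tmp/ecryptfs.dIWKskOD/
```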

I hope this helps. Also, note that if your drive is failing – as in my case – you may also want to use something like ddrescue to attempt salvaging as much data as possible.
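For reference, a typical GNU ddrescue run on a failing drive looks something like the following. The device name and file paths are examples only – double-check the actual device with lsblk before running anything, and always image to a different, healthy drive.

```shell
# First pass: grab as much data as possible quickly, skipping the slow
# scraping of bad areas (-n), and record progress in a mapfile
sudo ddrescue -n /dev/sdX /path/to/rescue.img /path/to/rescue.map
# Second pass: go back over the bad areas with direct disc access (-d),
# retrying each up to 3 times (-r3); the mapfile lets it resume where it left off
sudo ddrescue -d -r3 /dev/sdX /path/to/rescue.img /path/to/rescue.map
```

You can then run ecryptfs-recover-private against a mount of the rescued image instead of the dying disk itself.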

Best of luck!

Credit: Thanks to fabien85’s post at the LinuxMint.com forums.

Direct File-Sharing in Linux using SFTP

If you have files on one Linux PC that you’d like to transfer to another, and using a removable flash drive would be tedious, time-consuming or otherwise inconvenient, then SFTP by way of running an SSH server on one of the PCs will do the trick, and it might be the most convenient and/or quickest way to do it.

This is assuming that you don’t have a way to connect the PCs through some sort of shares via the graphical file managers. If so, then disregard. Otherwise, read on.

Note: This tutorial is aimed at systems using the APT package manager (Debian, Ubuntu, Linux Mint, etc.), so if you use something else you’ll have to adjust the commands that search for/install packages and that start/stop services, as needed.

In order to follow the tutorial, you’ll need to install the following software:

  • FileZilla on one PC
  • OpenSSH-Server on the other PC

In most cases, both of these applications are available as binary packages from the default repositories that your PC is already configured to use. It doesn’t matter which application you install on which system, but the one that gets FileZilla will be the one you directly use to control the transfer of files, so it is best to put it on the PC you’re actually planning to be at.

Note: Using SFTP, the session starts in the /home/$USER directory of the machine running the SSH server, and you can only browse beyond it where that user account has filesystem permissions.
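As a rough sketch of the whole process (package names are for APT-based systems; the 192.168.1.50 address and username are examples – adjust them for your own network):

```shell
# On the PC that will serve the files:
sudo apt update
sudo apt install openssh-server
sudo systemctl status ssh   # confirm the SSH server is running
ip addr                     # note this machine's LAN address

# On the PC you'll be working from:
sudo apt install filezilla
# In FileZilla, connect with:
#   Host:     sftp://192.168.1.50
#   Username/Password: a login on the server PC
#   Port:     22
# Or, from a terminal instead of FileZilla:
sftp user@192.168.1.50
```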

Linux & Windows Dual-Booting: Essential GRUB and Time Settings

If you dual-boot Linux and Windows, I consider these two settings essential. I’ve been using them for a while and decided they were worth posting on the blog as recommendations to others.

GRUB: Remember Last Used Option

First, I feel it is best to have GRUB remember the last chosen boot option. If you don’t agree, simply don’t do this and your system will always boot into the first option (0), which is going to be populated by whatever Linux OS you used to install GRUB onto the PC.

The biggest reason I prefer this is that most Windows updates require reboots, and some perform multiple reboots as the updates are applied. When I run Windows updates, I almost always find something else to do to bide my time, as they’re rarely snappy, and if I have to manually select the Windows boot loader in GRUB during each of those reboots, things are delayed even further. I can’t count how many times I’ve had to reboot out of Linux and back into Windows to finish updates because of this. Saving the last choice resolves that issue.

Typically, the GRUB configuration is at /etc/default/grub, and this must be edited with either root or super-user privileges. By default, the following setting is defined as:

GRUB_DEFAULT=0

You can edit that line as part of the following changes, but I typically just comment it out by placing an octothorpe symbol (#) in front of it, and then add my changes directly above before saving/exiting the file:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
#GRUB_DEFAULT=0

Lastly, after you’ve saved your changes, update the GRUB configuration with:

sudo update-grub

Done.
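As an aside, the whole edit can also be scripted. This is a sketch that assumes the stock GRUB_DEFAULT=0 line is present; it appends the new settings to the end of the file rather than above the commented line, which GRUB doesn’t care about. Back the file up first either way.

```shell
# Keep a backup copy of the original config
sudo cp /etc/default/grub /etc/default/grub.bak
# Comment out the stock default
sudo sed -i 's/^GRUB_DEFAULT=0/#GRUB_DEFAULT=0/' /etc/default/grub
# Append the "remember last choice" settings
printf 'GRUB_DEFAULT=saved\nGRUB_SAVEDEFAULT=true\n' | sudo tee -a /etc/default/grub
# Regenerate the GRUB configuration
sudo update-grub
```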

Time Configuration: Use Local Time

I consider this one essential for everyone. Other sites have explained why this happens much better than I can, but to summarize: Windows stores local time in the hardware clock and displays that time directly on your desktop. Linux, instead, stores UTC and applies an offset depending on your local time zone. For me, that’s UTC-5, so if I don’t make this change, Windows tells me the time is 5 hours in the future each time I boot into it after booting Linux (until I tell Windows to update the time online), and then Linux shows the time 5 hours in the past until it’s been updated. The whole process repeats with each OS change.

From what I’ve gathered searching online, this can be remedied either by making Windows use UTC or by making Linux use local time. Because I don’t care to edit Windows registry entries any more than I have to (and that’s apparently the only way to change this in Windows), I chose to make the change in Linux instead.

If you enter the following command, you’ll get the time/date settings and information on your Linux system:

timedatectl

If you’re ready to change Linux to using Local Time, you can do that and update the hardware clock all at once with the following command:

timedatectl set-local-rtc 1 --adjust-system-clock

Done. From here on out, Linux will store Local Time in the system hardware clock.
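You can confirm the change took effect by checking the RTC line in the timedatectl output:

```shell
# Look for the line that reports whether the hardware clock is in local time
timedatectl | grep "RTC in local TZ"
# After the change above, it should read:
#   RTC in local TZ: yes
```

Note that timedatectl will also print a warning about local-time RTC mode when this is enabled; that’s expected when dual-booting with Windows.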

I hope this is of some use to others. This info is available in numerous places online, but I felt it was worth collecting together on my blog.

Linux Mint: Managing Kernels

Quite differently from Linux Mint distributions of the past, the Update Manager now tends to install the latest kernel by default. Previously, it would leave these updates deselected and require you to manually check them to have them installed.

For a while, this wasn’t really a problem for me. Since I knew that old kernels are preserved and can be booted from in Grub if an issue presents itself, I didn’t concern myself much. However, a recent kernel update on one of my computers started causing a problem. I rebooted into it twice, and each time Cinnamon would crash and I would have trouble even getting to a point where I could reboot. So, I reverted to the last supported version.

What you might want to consider, however, is which version you want to use, and to choose it manually. Looking at the release schedules for the official Linux kernels here and comparing them to what Linux Mint shows in the kernel manager within the Update Manager, there appear to be some discrepancies. Whether different distributions make their own adjustments to how long they support different kernels, I don’t know. I just know I would prefer to use whatever is best supported for my Linux Mint installation.

Doing so is not hard, and I recommend it if you’re not interested in performing an update only to find yourself having trouble getting Linux Mint started. Just follow these directions for managing kernels from within Linux Mint’s Update Manager:

First, open the Update Manager by clicking on the small shield in your task bar:

Then, in the top menu, click on View -> Linux Kernels:

Click Continue at the warning. Do read it first, though:

Cycle through and become familiar with the available kernels. All kernels installed on your machine will say Installed next to them, and the one that is currently active will be listed as Active:

Personally, I recommend ensuring that the one with the latest support date is installed. This will usually be the LTS option available to you. Either way, make note of the one you intend to boot with, as you cannot actually choose the boot kernel from here.

You actually must choose which kernel to boot with from the Grub boot menu when you first start the machine. To get to this, go down below your Linux Mint 19.x… boot option in Grub to the one that says Advanced options for Linux Mint 19.x… and you will see all of the available kernels to use (the ones installed). Unless you know you need to do so, I would avoid selecting any in (recovery mode).

All of the kernels installed on your machine will be chronologically ordered with the most recent version at the top and the oldest version at the bottom. Select the version you wish to use and the machine will boot.

The next time you boot your machine, unless you have Grub configured to save and default to the last chosen option, it will boot using the latest installed kernel. So, if that kernel has caused problems, you will want to remove it from your system to ensure it doesn’t get loaded in the future. You can do this from the kernel management area of Linux Mint’s Update Manager, and I personally recommend doing that over manually removing kernels in the terminal. After you have removed the kernel, Linux Mint will show it as an available update again and indicate that there are updates pending because of it. To keep from re-installing the problematic kernel, just right-click it and choose Ignore the current update for this package. You don’t want to select Ignore all future updates for this package, as that would cause the Update Manager to never show any future kernel updates.
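If you just want to see what you’re working with from a terminal, you can at least list the installed kernels and check which one is currently running (removal itself is still safer through the Update Manager, as noted above):

```shell
# Show the kernel version currently running
uname -r
# List installed kernel image packages on an APT-based system like Mint
dpkg --list | grep linux-image
```

Any entry marked "ii" in the dpkg output is installed; the one matching uname -r is the active kernel.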

Linux Mint 19.x: Cinnamon and AMD Graphics

If you’re just now updating to a new Linux distro running the latest Cinnamon DE and you have AMD graphics, you may be running into some problems. This seems to be particularly common among those with older AMD cards.

I have an R9 290X in my machine, and after initial testing and booting into Linux Mint 19.2, everything appeared fine. However, after running updates and rebooting, I was greeted with Cinnamon has crashed and is running in fallback mode. Do you wish to restart Cinnamon? That’s paraphrased, as I didn’t screenshot the error and no longer remember it word-for-word, but you’ll know exactly which pop-up I’m referring to if you’re seeing it, too.

At first, I thought it might be an issue with Cinnamon itself, and started looking for indications that I should restore the system backup I made before updating. But then I started seeing signs that graphics rendering was the issue. Some forum threads had members suggesting that certain hardware may simply no longer be capable of running Cinnamon, but I couldn’t see my R9 card being unable to run it, so I logged out and started Cinnamon using software rendering instead. Cinnamon then ran fine, which told me the issue was with hardware rendering – in other words, the graphics driver.

Since hardware support in Linux is typically handled in the kernel, I started trying different kernels. I reverted to the kernel used prior to the system update with no success, then tested the newest kernel versions in the 5.x branch, also with no success. The last thing I could do at that point was hope for drivers that would support my system. The solution was AMD’s proprietary drivers, located here. Following the instructions – extracting the tar.gz and running the amdgpu-pro-install script – everything went smoothly, and a reboot had my system working as expected with hardware rendering.

The instructions for installing the drivers were straightforward, as laid out by AMD in the documentation.
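For reference, the install went roughly like this. The archive name below is an example only – use whatever version you actually downloaded from AMD’s site, and defer to the instructions bundled with it.

```shell
# Extract the downloaded driver archive (example filename)
tar -xzf amdgpu-pro-XX.YY-ZZZZZZ-ubuntu-18.04.tar.gz
cd amdgpu-pro-XX.YY-ZZZZZZ-ubuntu-18.04
# Run AMD's bundled install script, then reboot to load the new driver
./amdgpu-pro-install
sudo reboot
```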

I hope this helps someone.

Commercial DVD Playback in Ubuntu 18.04

This also applies to KDE neon installations using the Ubuntu 18.04 base, and possibly Linux Mint 19.x.

sudo apt install libdvdnav4 libdvdread4 gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly libdvd-pkg
sudo dpkg-reconfigure libdvd-pkg
sudo apt install ubuntu-restricted-extras

A lot of online advice suggests doing everything except reconfiguring the libdvd-pkg package, which leaves you without the libdvdcss package that most applications need in order to read commercial DVDs. Following all the steps above should install everything needed and grab the latest libdvdcss directly from videolan.org.
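After the dpkg-reconfigure step finishes building the package, you can verify that libdvdcss actually made it onto the system:

```shell
# Check that the libdvdcss library built by libdvd-pkg is installed
dpkg -l | grep libdvdcss
# An "ii" status in front of libdvdcss2 means it's installed correctly
```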

This is also explained by the VideoLAN devs themselves here.

Metal Gear Correction

The article The Mad, Unlikely Genius of ‘Metal Gear Solid’ is a fairly interesting read. I grew up on the MGS games, and I agree that they were revolutionary in terms of the style and technique of game-play that they presented. Great successors like the Splinter Cell games certainly owe homage to the MGS series to some degree. I also enjoyed reading about fondly remembered games like GoldenEye on the N64, as I spent many hours playing that one, as well.

However, there was a part of the article that I immediately disagreed with. Specifically, this paragraph…

The series begins with director Hideo Kojima’s first game, Metal Gear, released for the MSX2 in July 1987. Konami spun Metal Gear Solid off from two old-school MSX games, the original recipe and Metal Gear 2: Solid Snake, promoting the formerly 2D Solid Snake to a new generation of virtual espionage and combat. Kojima continued to direct. MSX hardware sold poorly in North America, and so most Metal Gear Solid players outside of Japan had never played the two earlier Metal Gear games. In October 1998, Metal Gear Solid was their grand introduction to a series that now continues, rebelliously, despite Kojima’s 2015 exodus from Konami.

The part, in particular, was…

MSX hardware sold poorly in North America, and so most Metal Gear Solid players outside of Japan had never played the two earlier Metal Gear games.

This implies that the Metal Gear games that preceded the first Metal Gear Solid game were exclusively published on the MSX2 system, which only did well in Japan. In all honesty, I’ve never played an MSX or MSX2 system, nor would I even recognize one if I saw it. However, I have played the first Metal Gear game, because I owned it on the NES. In fact, I even remember the cover art of the game standing out to me, because the soldier in the artwork reminded me of the actor Michael Biehn from big 1980’s action movies like The Terminator and Aliens.

Not that I have some unhealthy fascination with Michael Biehn, but I was a sci-fi/action movie junkie as a little kid – as most kids were back then – so I was quite familiar with movies like The Terminator, Predator, Aliens, RoboCop, etc. When movies like Bloodsport and Universal Soldier were relevant, I was even a huge JCVD fan. Many fail to realize, or remember, that before the advent of the internet, movies, video games and comic books were the escape of most young boys. That, and actually going outside. Another thing to note is that movies back then were more impactful and helped to engage the imagination. It’s rare that I see a movie today that I can sit down and watch more than once, but I would still sit down and watch those 80’s movies today if they happened to be playing and I wasn’t busy, so it’s clearly not just that they were a shock at the time. They hold their value, in my opinion.

So, anyway. Even though I have no reason to question anything else in the article, I decided it was worthwhile to point out what I felt was an erroneous statement. The author covers himself by saying most MGS players outside of Japan had never played the two earlier games (which is still an assumption without data behind it – and one I would wager against), but his reasoning is muddied by treating the games as if they were only available to owners of the MSX/MSX2 systems. At least the first game was actually ported to multiple other systems – including the NES, MS-DOS and the Commodore 64 – all of which combined to make a considerable presence in North America. Just sayin’!

Well, that concludes my rant for the evening. Good night!

Ubuntu 18.04 Live Installation – How To Reboot

Just a quick tidbit for those who find themselves stuck at the notorious Please remove installation medium, then reboot. message that restarting/shutting down from a Live Boot of Ubuntu 18.04 presents. I’ve seen several people mention this problem after it popped up for me during a test run, but I didn’t see any solution mentioned – everyone stated that they had to hard reboot their PC by holding in the power button. As most will find out, pressing Enter, Esc or other common keystrokes does nothing. I even tried a console command such as sudo reboot with no luck – even though the screen doesn’t technically present a terminal prompt (I was just trying anything at that point).

So, do I have a solution? Yep…

CTRL + C

You’ll see your screen magically go black and the PC reboot (even if you chose Shutdown from the exit menu in the Ubuntu Live Session).

The only other nuisance I’ve seen with this (and it could just be something to do with my particular setup) is that the EFI boot manager gets altered and Windows Boot Manager is set as the first boot option after running the Live Session. I’m unsure whether that’s something to do with Ubuntu (it happens when testing KDE neon as well, which is Ubuntu-based) or with the fact that the PC boots via EFI. Some might boot into Linux and reinstall Grub to get around this, but simply going into the BIOS/UEFI settings when the PC boots and rearranging the OS boot order gets things back to the way they were. An easy fix, but annoying that I’ve had to do it each time I’ve tested a Live Boot of Ubuntu or KDE neon (I ended up installing Linux Mint 19 after the first Live Session test, so I didn’t check whether it would cause the same issue).
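If you’d rather fix the boot order from within Linux instead of the firmware setup screen, efibootmgr can do it. The entry numbers below are examples only – read your actual numbers from the first command’s output before setting anything:

```shell
# Show current EFI boot entries; the BootOrder line lists the current priority
sudo efibootmgr
# Example entries might be:
#   Boot0000* ubuntu
#   Boot0001* Windows Boot Manager
# Put the Linux boot loader first again (substitute your own entry numbers)
sudo efibootmgr -o 0000,0001
```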

I do think it’s stupid that the latest LTS version of Ubuntu doesn’t provide a more straightforward approach to this situation (the historical Remove media and press Enter. has always worked well, and is still how other distros do it), but… there ya go. At least a proper resolution exists until Canonical sorts out the emotional storm they seem to be going through and gets back to the straight and narrow.

As for the return to Gnome… I thought the desktop looked fine. I’ve never been a fan of the Ubuntu purple theme color, but Gnome seems to run as well as Unity did in my previous Ubuntu experiences. I only started using Ubuntu around the arrival of 12.04, so I wasn’t familiar with the Gnome 2 Ubuntu of earlier times. For what it was, Unity seemed fine to me.

The only thing that pushed me away from Ubuntu was Canonical’s more commercially minded moves: forcing Amazon integration, tracking, and profiting from dash searches and the like. Even if there were ways to get around it all, the problem is that Canonical wanted to force those practices onto its users in the first place. I’ll compare it with phpBB’s attempt to fund itself by profiting from video embeds in newer versions of their software. The difference, however, is that phpBB lets you know this is something they would like you to do when you install the software on your website, and provides appropriate (and simple) means of opting out. Like Canonical, the developers of phpBB provide their product free-to-use but gain some profit from services related to it. Where it stands out to me (from a layman’s point of view) is that they are fairly transparent about funding their work from your use of their otherwise completely free product. Canonical, instead, went about it in a way that seemed to imply they didn’t want the user to know it was happening. I know this is a dead horse that was beaten several years ago, but it is ultimately why I ditched Ubuntu for other distros – even ones that use Ubuntu as a base.

Up until recently, I’ve been happy using Linux Mint – and with the discontinuation of their KDE option, I’ll likely also be looking at KDE neon. With alternatives such as these available to me, I’d be quite surprised if I ever install Ubuntu as a day-to-day OS on anything I own again.

HostUS: 2GB OpenVZ VPS Special

A belated Merry Christmas, Happy New Year and Happy Holidays to everyone.

HostUS is offering an unmanaged 2GB OpenVZ VPS from December 25th, 2016 through January 3rd, 2017 for $25/year. I know I’m late getting this up here, but I just learned of it and there’s still a week left on the special.

The VPS features the following:

  • 2GB RAM (with 2GB vSwap)
  • 50GB Raid 10 Disk Space
  • 2 vCPU Cores (Fair Share)
  • 2TB Bandwidth (monthly)
  • 1 Gbps transfer speed
  • 1 IPv4 and 4 IPv6

The servers are available for the following locations:

  • Atlanta
  • Dallas
  • Los Angeles
  • Washington, DC
  • London

I’ve used HostUS for several years now and have been completely happy with the service and support (when needed). I’ve received maybe 2-3 emails from them in that time stating that a VPS of mine was taken down temporarily for an issue, but it has never hindered me in any way (I’ve never noticed the outage when it happens), and they have always promptly resolved the issue and had the server back up within the times given in the email. For what breaks down to less than $3/month for a VPS with these specs, it’s hardly a difficult choice if you’re in the market, and they don’t appear to fuss about what you do on their servers as long as it doesn’t violate any laws, exploit security vulnerabilities (such as running DDoS attacks), or cause unjustified overhead on their server resources.

Like I said, if you’re in the market for a VPS (and OpenVZ suits your needs), these guys are probably as good as any you’ll find for the money.

If you want to look over location and network information for them, check out this page.