Last year, we purchased a pair of Peplink Balance 380s for our office. Their ability to load balance across multiple Internet connections, including using a cellular USB dongle as a backup, was very attractive. I received the pair of devices and, without too much difficulty, got them connected and routing traffic in and out of the blocks of IP addresses we have with our two Internet service providers.
I tested the load balancing/failover by pulling the plug of one of our Internet connections. The Peplink router quickly moved all traffic to the remaining connection. Over the last year, none of our employees have ever even noticed when one of our connections has gone down.
Several months ago, I tested the feature we actually bought two of them for: once configured in high-availability mode, the secondary router is supposed to take over when the primary fails. I simulated a failure by pulling the plug on the primary while pinging both the virtual gateway IP address and an IP address outside our network. The results were impressive:
- 7 seconds total for the secondary router to re-establish internal connectivity.
- 13 seconds total for the secondary router to re-establish Internet connectivity.
The primary router was configured to re-establish its primary role upon rebooting. I plugged it back in, and the results were similarly impressive:
- 2 seconds for the primary router to re-establish internal connectivity.
- 8 seconds for the primary router to re-establish Internet connectivity.
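For anyone who wants to reproduce the timing test, the numbers above came from watching pings while pulling the plug. A rough sketch of that, as a small shell function (the gateway address in the example is a placeholder for your own virtual gateway IP):

```shell
# probe_gateway pings an address once per second for a fixed number of rounds
# and timestamps whether it answered; failover time is the length of the
# DOWN window in the output.
probe_gateway() {
    addr=$1
    rounds=$2
    n=0
    while [ "$n" -lt "$rounds" ]; do
        if ping -c 1 -W 1 "$addr" > /dev/null 2>&1; then
            echo "$(date '+%H:%M:%S') up"
        else
            echo "$(date '+%H:%M:%S') DOWN"
        fi
        n=$((n + 1))
        sleep 1
    done
}

# Example (placeholder address -- substitute your virtual gateway IP):
# probe_gateway 192.168.1.1 30
```

Running the same probe against an outside address at the same time separates the internal-connectivity number from the Internet-connectivity number.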
While purchasing two of these routers cost quite a bit more than just purchasing one, the pair allows us to sleep soundly at night knowing that if one fails, our Internet connectivity will remain intact and business can continue normally while we replace the faulty router.
My master’s thesis has been published on Marquette’s website in its entirety. In a single sentence: I turned a Linksys router into a telephone. Since then, Kyle Persohn has expanded and improved the work I began.
After installing Ubuntu 11.04 on my Dell D620, I began noticing some wireless connectivity issues: delays or failures connecting to my home wireless network, increased latency (particularly when transferring files), and occasional disconnects. After upgrading to Ubuntu 11.10, the problems got worse. Some searching online revealed possible solutions.
Installing the “b43-fwcutter” and “firmware-b43-installer” packages and rebooting the laptop is what ultimately worked for me.
aptitude install --quiet --assume-yes b43-fwcutter firmware-b43-installer
A couple of years ago, I built two servers using EVGA 680i SLI motherboards. I chose that particular board because it had two Ethernet jacks and six SATA ports. At the time, I also purchased three SATA hard drives and a SATA optical drive. I plugged in the four devices, installed Ubuntu 8.04 LTS, and thought nothing of it. When I updated one of the servers to 8.10, I noticed that one of the newer kernel versions didn’t seem compatible with the drive configuration. I used an older kernel version and, over time, replaced SATA cables and switched the active SATA ports around. Eventually, it began working correctly on the latest kernel. I upgraded to 10.04 LTS, and things continued without incident.
However, a couple of days ago, when I decided to install a fourth hard drive, I ran into the same problem again. Some searching turned up possibly related bug reports. One suggested solution is to build a custom kernel. I opted to simply shuffle the SATA cables around again: I moved all four hard drives to the four ports facing upward (ports 3-6) on the motherboard and moved the optical drive to one of the two ports facing outward (port 1).
Since the problem occurs during the boot process and only seems to affect ports 1-2, all four hard drives now function properly, and I can still boot from an optical disc or mount one once the computer has finished booting. Unfortunately, this arrangement makes adding a fifth (or sixth) hard drive impossible, but it’s a trade-off I am willing to live with until the problem is resolved (if it ever is).
Ars Technica recently published an article about ISPs hijacking DNS requests to watch web searches. A couple of years ago, I discovered that any time I punched in an invalid domain name, instead of being told the domain did not exist, I was redirected to a search page. The search page had an opt-out feature, but it reset after a few hours. I wrote a script to automatically opt myself out every few hours, but even that was ineffective. When I called CenturyLink (my ISP) about the problem, they first denied it. After I argued with the representative for a while, he eventually informed me that this was how the feature was supposed to work. I asked him how an opt out could be useful if it wasn’t really an opt out. He didn’t have an answer. Eventually, I switched to alternative DNS. However, one solution for those of us running DD-WRT on our routers is to add additional DNSMasq options. While OpenDNS does honor opt outs, I still add the IP addresses they use to my configuration.
Before adding anything, pinging an invalid domain shows:
PING garbage.invalidtld (126.96.36.199) 56(84) bytes of data.
64 bytes from hit-nxdomain.opendns.com (188.8.131.52): icmp_req=1 ttl=56 time=54.4 ms
I went into the Services page of DD-WRT and added the following to the “Additional DNSMasq Options” section:
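One way to do this is dnsmasq’s bogus-nxdomain directive, which converts any answer pointing at the listed address back into a proper NXDOMAIN response. Using the hit-nxdomain address from the ping output above, the entry would be:

```
# Convert answers pointing at OpenDNS's hit-nxdomain server back into NXDOMAIN
bogus-nxdomain=188.8.131.52
```

One bogus-nxdomain line is needed per redirect address the provider uses.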
Now the same command returns the proper response:
ping: unknown host garbage.invalidtld
I could have applied the same method to filter CenturyLink’s DNS responses, but I have been happier with OpenDNS and decided not to switch back.
As Dojo suggests on its website, I opted to load the Dojo Toolkit from Google’s CDN. This worked very well until today. After doing some digging, I realized that, according to Google’s documentation, the hosted URL has changed. I have to imagine this will break a lot of websites, but at least the fix is fairly simple: point the script tag at the new URL.
First, I set up the styling (no scrollbar, font, font size, background, and foreground colors):
xterm +sb -fa monaco -fs 10 -bg black -fg white
Next, I redirected the output and backgrounded the process:
xterm +sb -fa monaco -fs 10 -bg black -fg white > /dev/null 2>&1 &
This worked well for quite a while, but when I spawned a terminal from a shell in an arbitrary directory, the new shell started there too. I wanted it to start in my home directory, so I added:
eval $( cd ; xterm +sb -fa monaco -fs 10 -bg black -fg white > /dev/null 2>&1 & )
Finally, I wanted to fully disown the new xterm from the shell I spawned it from. Therefore, my .bash_aliases file now has:
alias term='eval $( cd ; xterm +sb -fa monaco -fs 10 -bg black -fg white > /dev/null 2>&1 & disown %1 )'
Now I can cleanly spawn a new terminal that sends no output to the existing shell.
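As a design note: the eval/disown combination is one way to detach the new xterm from the launching shell. If the setsid utility from util-linux is available (an assumption, though it ships with most Linux distributions), the same detachment can be achieved more simply:

```shell
# Run xterm in a new session, fully detached from the launching shell.
# Assumes the setsid utility (util-linux) is installed.
alias term='( cd && setsid xterm +sb -fa monaco -fs 10 -bg black -fg white > /dev/null 2>&1 )'
```

Because setsid starts the process in a new session, the xterm never receives the launching shell’s hangup signal, so no disown is needed.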
Dual-screen configuration used to be quite the hassle on Linux. However, Nvidia has made it incredibly easy with their nvidia-xconfig command. The “--no-logo” argument eliminates the Nvidia logo when X starts, and “--twinview” enables the second display.
nvidia-xconfig --no-logo --twinview
Now I can configure my systems for dual displays during an Ubuntu installation without the need for reinstalling an old hacked together xorg.conf file.
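For reference, the options this writes into /etc/X11/xorg.conf look roughly like the following (an approximation of the relevant lines, not the complete generated file):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "NoLogo" "True"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "TwinView" "True"
EndSection
```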
Since Ubuntu Hardy Heron, it has become much easier to install Flash on Ubuntu, but the included restricted packages always leave me a bit disappointed. Luckily, Adobe provides a proper 64-bit version of Flash for Linux called “Square”. Since I tend to automate my installations, I wrote a script to install the latest version of Flash on my computer:
# Remove any installed Flash packages
aptitude remove --quiet --assume-yes flashplugin-installer flashplugin-nonfree
# $FLASH holds the path to the "Square" tarball downloaded from Adobe Labs
tar xzvf $FLASH
# Install the 64-bit plugin where Mozilla-based browsers look for it
mv libflashplayer.so /usr/lib64/mozilla/plugins/
Now Flash runs properly, and with the switch to “Square,” it even seems to consume fewer resources on my machine.
Recently, I tried to dump data from a production database and import it locally in a development environment. I went through the normal process of dumping the data:
mysqldump database > database.sql
And importing it locally:
mysql database < database.sql
However, I quickly got a duplicate key error:
ERROR 1022 (23000) at line 1170: Can't write; duplicate key in table 'sys_tracking_archive'
After some looking, I discovered the “--insert-ignore” option:
mysqldump --insert-ignore database > database.sql
The second attempt to import the data worked correctly. Alternatively, I could have replaced all instances of “INSERT” with “INSERT IGNORE” in the original SQL dump file.
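That sed-style replacement would look something like the following; the dump line here is a fabricated one-line stand-in, purely for illustration:

```shell
# Create a one-line stand-in for the real dump file, purely for illustration
printf 'INSERT INTO `sys_tracking_archive` VALUES (1);\n' > database.sql

# Rewrite every plain INSERT as INSERT IGNORE so duplicate-key rows are skipped
sed -i 's/^INSERT INTO/INSERT IGNORE INTO/' database.sql

cat database.sql
```

Note that INSERT IGNORE silently skips every duplicate row, so it is best reserved for cases like this one, where the duplicates are known to be harmless.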