I just uploaded the article here, which discusses the Bash script I created to call FreeRDP and prompt end users for credentials. I hope you find it helpful.
I just finished a detailed write-up on getting FreeRDP compiled on the Raspberry Pi. I came across a lot of sites with comments asking how to do it, but no real in-depth information on actually doing it. I spent a few days figuring it all out. I even got a rudimentary GUI to prompt the user for a username/password combination. The compiling steps can be found here, and I’ll follow up with how I set up the desktop icon and prompt the user for credentials. I hope this can be useful to the internet community at large.
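To give you a taste of the credential prompt, here’s a minimal sketch of the idea. This is an illustration, not my exact script: it assumes zenity is installed for the GUI dialog, a newer FreeRDP build (older builds use `-u`/`-p` style flags instead of `/u:`/`/p:`), and the server address is a placeholder.

```shell
#!/bin/bash
# Sketch: ask for credentials with zenity, then launch FreeRDP.
# Assumes zenity and xfreerdp are installed; the server is a placeholder.

# "zenity --password --username" prints the result as "username|password";
# split on the FIRST | so a password containing | still comes through intact.
split_user() { printf '%s\n' "${1%%|*}"; }
split_pass() { printf '%s\n' "${1#*|}"; }

connect() {
    local server="$1" creds user pass
    creds=$(zenity --password --username --title="Remote Desktop Login") || return 1
    user=$(split_user "$creds")
    pass=$(split_pass "$creds")
    # /f = full screen; /u /p /v is the option style of newer FreeRDP builds
    xfreerdp /v:"$server" /u:"$user" /p:"$pass" /f
}

# Example: connect rdp.example.com
```

Splitting on the first `|` rather than the last matters because usernames can’t contain `|` in the zenity output, but passwords can.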
I was recently called upon to perform a data recovery on an HP desktop that was set up with a RAID 0 volume. As you can imagine, the volume failed when one of the hard drives started acting up. I had to resort to all my old tricks, and some new ones, to get some (but not all) of the data my friend needed.
Step 1) This involved using my old friend, DOS-based Ghost, run from a boot disk. The last version I have from this era is 11.5. I was able to image 143GB of 166GB; Ghost hung on massive numbers of bad blocks. Continuing was going to take days, so I decided to stop there. I wasn’t sure what data I had until I could get the image onto a working system.
Step 2) Normally, once I have a Ghost image, I would open GhostExplorer. This wasn’t an option since the Ghost process didn’t finish normally. I then started looking around for tools I could use to recover the data.
Step 3) This was a process I hadn’t tried before. I created a .vmdk file from the collection of Ghost files (*.gho) the DOS version produced.
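As a rough sketch of the conversion step (an assumption about Ghost 11.x’s command line, not necessarily the exact command I used): Ghost’s `-clone` switch can restore a .gho image straight into a VMware .vmdk file, which you can then attach to a virtual machine. You’d run this under Windows or WinPE with ghost32.exe; the file names here are placeholders.

```shell
# Build the Ghost restore command: .gho image in, .vmdk disk out.
# File names are placeholders; -sure suppresses the confirmation prompt.
SRC="recovered.gho"
DST="recovered.vmdk"
CMD="ghost32.exe -clone,mode=restore,src=${SRC},dst=${DST} -sure"
echo "$CMD"
```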
Once I had this .vmdk, I loaded it into a virtual machine to see what I could recover. To my surprise, it actually worked rather well. I was able to get most of the User Profile folder and some Program Data folders that were critical. This was the first time I attempted a data recovery from a RAID system. I’m rather good about backups, so I personally never have to perform any data recovery myself. But knowing how people are at home, I know every day someone out there loses everything on their hard drives, never to see it again. I also know hard drives don’t last forever, so I rotate my data to new media every few years.
The lesson here is to be proactive with your data. Back up at least every quarter so you don’t lose everything in the event of a failure. I’ll add an expanded section reviewing the commands I used to create a .vmdk from Ghost files and the few hurdles I had to overcome due to a bad Ghost image.
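If you want something dead simple to start with, here’s a hedged sketch of a minimal dated backup you could run from cron each quarter. The paths in the example call are placeholders for your own machine.

```shell
#!/bin/bash
# Sketch: archive a directory into a dated tarball under a backup location.
backup() {
    local source="$1" dest="$2" stamp
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$dest"
    # -C changes into the parent dir so the tarball holds a clean top-level folder
    tar -czf "$dest/backup-$stamp.tar.gz" -C "$(dirname "$source")" "$(basename "$source")"
}

# Example: backup "$HOME/Documents" /mnt/backup
```

A tarball on the same disk isn’t a real backup, of course; point the destination at external media or another machine.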
Today I want to talk about my two favorite open-source firewalls: Monowall and PfSense. If you’re into open source at all, you’ve probably come across them both. I personally love Monowall. It’s small and fast, and if you put it on really nice hardware, you’ll get great performance. The only issue I still have with Monowall is its lack of load-balancing and fail-over capability. That’s when I came across PfSense. PfSense was branched off of Monowall to provide more enterprise-level functions, including load balancing and fail-over. You can check out this link to see how to set up a fail-over firewall with PfSense.
There were times on the job when a firewall just failed. I had a client who actually had two firewalls fail at the same time after Hurricane Sandy. That wasn’t fun. My first instinct was to run to my good old Monowall, but I didn’t have enough network cards to implement the solution I wanted, so I found a hardware router with enough ports to keep them going until a long-term solution was applied. At another site, I just needed two network ports. In less than an hour I had found a computer, installed a second network card, downloaded Monowall, and configured it for their network to get basic functions running again. That client only had to wait for a new firewall to be shipped overnight.
Both Monowall and PfSense provide virtual machine images to make testing easier, so you can find the one that best fits your needs. They’re worth looking at if you’re on a tight budget and can’t yet afford commercially supported products. PfSense also offers a paid support subscription that may interest some of you.
So just remember: if you’re ever in a tight bind with a failed firewall and you don’t have high availability in place, Monowall or PfSense could come to the rescue for you and your company.
This week I had a customer who requested that the App Flow Monitor function be activated on his SonicWall firewall. This was simple enough and required a reboot. I then noticed that the SonicWall could store the data locally or send it to an external NetFlow collector. My curiosity piqued, I started to research what the NetFlow protocol was.
NetFlow is a protocol developed by Cisco to collect network traffic information (straight from Wikipedia). I then began looking for a free, open-source NetFlow collector. My corporate clients being Windows-based, I tried a few Windows solutions first. They were usually trial or limited versions of commercial products and didn’t fit the bill. I then expanded my horizons to Linux and came across this post, which showed I could install ntop with good old “apt-get install” in Ubuntu. So I built a quick, simple Ubuntu server in a test VM environment and installed ntop. I followed the post’s instructions to get to the web interface, but then I got a little stuck and had to play with it to get it started. Activating it was easy, but I had to figure out how to actually “add a NetFlow device”. I’ll include more detailed instructions hopefully (time allowing). Ntop is rather easy and simple to set up. I wasn’t able to put it to the test against a live router yet, so I don’t know how useful the data is. I do have a home ASA5505 I plan to monitor to see what kind of data I can collect. I’ll post more on the topic in the future.
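For the router side of the equation, here’s a hedged sketch of what pointing an ASA’s NetFlow export (Cisco calls it NSEL on the ASA) at a collector looks like. I haven’t run this against my ASA5505 yet; the IP address, interface name, and port below are placeholders for your own setup.

```text
! Send NetFlow (NSEL) records to the collector (the ntop box) on UDP 2055
flow-export destination inside 192.168.1.50 2055
!
! Attach flow export to the global policy so all traffic events are reported
policy-map global_policy
 class class-default
  flow-export event-type all destination 192.168.1.50
```

The collector side then just needs to listen on the same UDP port you exported to.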
I discovered that when I ran “apt-get install ntop”, an older version of ntop was installed. Ntop on startup mentioned that a newer version was available and recommended upgrading. I then came across the following link for installing the newest version. If you’re running Ubuntu or Red Hat/CentOS, you just add the repositories for the installers; follow the links at the names for instructions on adding the repositories for Ubuntu and Red Hat/CentOS. Remember to run “apt-get update” or “yum update” to refresh the repositories after adding them. You can then go to “http://ip address of your computer” to reach the nbox management interface and access ntop from there.
My main goal in setting up this blog is to share my insights and discoveries in computing. These include my experimental setups with Linux and my professional experiences with data center systems (SANs, VMware, SQL clusters, etc.). Over the next few weeks, I’ll begin to document my attempts to build a Linux-HA system using different setups. The software I’ve tested includes DRBD, Gluster, SCST, and the GUI interface LCMC. LCMC is a GUI for configuring either Corosync/Pacemaker or Heartbeat/Pacemaker.
I’m a huge fan of Ubuntu Linux, and I built my systems on a VMware ESXi host and two small physical servers.
My approach will be a modular one. Many of the systems I tried reuse the same software packages. Instead of just giving you one recipe for each system, I’ll write up instructions for each module, and you can use these as building blocks for your own SAN.
My test SANs ran in a virtual environment using Ubuntu as my Linux distro, with the following setups…
From the above, you can see there is a lot of repetition going on. You may be asking why I chose SCST instead of (let’s say) IET. From my research, IET doesn’t support SCSI-3 Persistent Reservations, which are required for VMware and Windows clustering.
I’ll cover the pitfalls I ran into in building each module.
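To give a feel for what the module write-ups will look like, here’s a minimal DRBD resource configuration in the style of the DRBD 8.x docs. This is a sketch, not my working config: the hostnames, backing devices, and addresses are placeholders.

```text
# /etc/drbd.d/r0.res -- minimal two-node replicated volume (placeholder values)
resource r0 {
    protocol C;                      # synchronous replication
    on san1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # backing block device on san1
        address   192.168.1.11:7788;
        meta-disk internal;
    }
    on san2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;         # backing block device on san2
        address   192.168.1.12:7788;
        meta-disk internal;
    }
}
```

Protocol C waits for the write to hit both nodes before acknowledging, which is what you want for a SAN; the faster A/B protocols trade safety for latency.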
One special note I must make: if you decide to build a SAN, EVERYONE online insists you implement STONITH (Shoot The Other Node in the Head) to prevent data corruption from a split-brain situation. As I find time, I’ll work on adding a STONITH setup. The most popular method is to use an APC UPS with a serial connection to remotely power off the offending node.
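For the curious, here’s a hedged sketch of what that looks like in Pacemaker’s crm shell, using the apcsmart fencing plugin for a serial-attached APC UPS. I haven’t built this yet, so treat it as an outline: the resource name, tty device, and hostnames are placeholders.

```text
# crm configure -- fence via an APC UPS on the serial port (placeholder values)
primitive st-apc stonith:apcsmart \
    params ttydev="/dev/ttyS0" hostlist="san1 san2"
clone fencing-clone st-apc
```

Cloning the fencing resource lets either surviving node pull the trigger on the other.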