Category Archives: Deneb implements

Deneb has implemented

Router woes

I have an old WRT150N router at home; it's old, the web GUI stinks and it struggles with my 20 Mb/s Internet connection. I decided that a refresh was in order.

Wish list

  • IPv6
  • OpenVPN tunnels
  • performance
  • nice GUI
  • low power (sub 10 watts)

I tried upgrading my old router to DD-WRT firmware.

This is a relatively simple, if time-consuming, process: do a 30-30-30 reset on the router (yes, that's effectively switching it off and on again), upload the new firmware, wait, and then do another 30-30-30.

Once I had upgraded I came across my first problem.

The WAN (Internet) interface setting was, for some inexplicable reason, bound to the wireless interface.

After figuring out that eth2 was the wireless, eth1 should be the WAN and eth0 was the LAN switch, I could get online!
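If you hit the same mis-binding, it can be inspected and fixed from the DD-WRT shell with nvram; the sketch below assumes the stock wan_ifname variable and the interface names I found on my router, so check yours first.

    # See which physical interface DD-WRT thinks is the WAN
    nvram get wan_ifname
    # Re-bind the WAN to the wired port (eth1 on mine - yours may differ)
    nvram set wan_ifname=eth1
    nvram commit   # persist the change
    reboot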

Performance was great over wires, but then my second issue arose.

Wifi issues on the router

The wireless configuration in DD-WRT is awful and it simply didn't work for me. I won't bore you with the complete list of variables I meticulously changed one by one; suffice to say it was enough to convince me to roll the firmware back.

Cisco router factor

My next problem: Cisco, who bought Linksys, has dropped support for my trusty old router, and that includes dropping the download of the original firmware.

I eventually found it, but it shouldn't be this hard!

The hunt for a new router

I started the hunt for a replacement…
That's for another post, another time.

SOCKS proxy through an SSH tunnel

I have on several occasions found myself on an insecure network (such as 'free' wifi in a restaurant) and wanted to browse the web securely. There are many ways to achieve this aim; this is one of the quickest, assuming you have a server to SSH to.

Principle of a SOCKS proxy through an SSH tunnel

SSH to a system you administer/own and trust; this creates the secure connection (make sure you verify the host's fingerprint). Then tunnel your web traffic over this connection using a SOCKS proxy.
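In practice that can be a single OpenSSH command; the port and hostname below are placeholders for your own choices.

    # -D opens a dynamic (SOCKS) forward on a local port,
    # -C compresses the tunnelled traffic, -N runs no remote command
    ssh -D 1080 -C -N you@your-server.example.com

Then point the browser's SOCKS proxy setting at localhost:1080 and all web traffic leaves via the trusted server instead of the restaurant's wifi.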

Continue reading SOCKS proxy through an SSH tunnel

Multi-tier architecture in action

What is meant by a multi-tier architecture?

The generally accepted definition of a multi-tier architecture describes the separation of the Presentation, Logic and Data roles. This is covered in more detail in the Wikipedia article Multitier architecture; however, that article doesn't describe how a multi-tier architecture is actually implemented.

I have worked with many different organisations of varying sizes and differing needs, and here are four of the most common high-level implementations I come across.

Continue reading Multi-tier architecture in action

Google Apps for Business

I read an interesting article on The Register that, as an Infrastructure Administrator, I don't agree with.

The article is based upon survey results from large blue-chip firms in the UK, who effectively responded that they didn't see Google's services as being ready for large business.

Deneb has been using Google Apps for Business for 18 months, and my current full-time employer uses it across a reasonably sized organisation, so I believe I have enough experience with the product and service to tell you that there is some smoke and mirrors going on with these respondents.

I am sure that some truly believe that Google Apps is not ready for business; however, there are some who are replying no in order to keep the business advantage to themselves.

The best things about Google Apps for Business

  • The cost: a few dollars per user per month
  • E-Mail server management is outsourced, both hardware and software
  • Spam and anti-virus filtering for your E-Mail is based upon heuristics gathered across millions of accounts
  • On-line office applications that have enough features to use daily without the extra licensing $ (or £ here in the UK)
  • Collaboration is simple, live and encouraged. It's amazing to work on a document live with someone in another country, seeing what they type as they type it, with integrated Google Chat to discuss the changes being made.
  • Single sign-on to other Google apps and services. One Google account can get you into Webmaster Tools, the E-Mail administration, the blog admin, the YouTube account, etc.
  • Two-factor authentication is a simple per-user switch, so your accounts get a big jump in security

There are lots more cool features, but only so much room to blog. If your company wants to look at ditching its old, crufty mail and office solution in return for an always-on Internet solution, Deneb can help; just E-Mail contact@deneb.co.uk to discuss.

Tiny Core Linux

A quick post about a tiny version of Linux. I was recently introduced to Tiny Core Linux because I had an old machine that needed a very lightweight operating system. The download shows just how small a fully functional OS can be.

Core starts at 8 MB and only includes a command-line interface; I recommend downloading the full-on 64 MB Core Plus edition, as this includes drivers for wireless and runs pretty well on my old Dell laptop.

As for performance, you are in for a treat. On my nine-year-old Dell it boots from disk in less time than the BIOS takes to check the RAM. Once in, the system runs using less than 70 MB of RAM! On my Asus Eee PC it runs a treat (once I installed Tiny Core Linux instead of Windows 7 Starter).

The downside of this performance is productivity. The Core Plus install does have a GUI, but it doesn't come with any software installed; that is all done post-boot, over the Internet, via a built-in app installation tool.
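Alongside the graphical Apps browser there is a command-line installer, tce-load; the package name below is just an example and depends on what the current repository offers.

    # Download (-w) and install (-i) an extension from the Tiny Core repository
    tce-load -wi firefox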

You can customize the image to include software, but anything you add is loaded into RAM at boot and will degrade the performance.

Overall I like the OS as a Linux install on an old memory stick. It's quick and, as long as you have an Internet connection, it's usable as an emergency tool or on an old, slow PC.

Cassandra cluster

It’s live and it’s quick!

I helped to implement a Cassandra cluster inside an Amazon VPC earlier this year. It worked fine, if a bit slow. We tried increasing the number of nodes (scale out) and we tried larger nodes (scale up), along with tuning Cassandra and the application. In the end we could make the application go a bit faster, but the connection to the Cassandra cluster seemed to be the limiting factor. The decision was made, for speed reasons, to get the Cassandra nodes into our own rack so that we had a fast link between the application servers and the Cassandra cluster.

We had loads of fun testing configuration options to get a well-balanced server specification while fitting inside the power and financial budget.

We had a specialist from Acunu in for a quick training session to ensure we had a good grasp of Cassandra's fairly specialist requirements: its tuning, maintenance and the underlying technology. This helped us to understand the magic triad that needs to be balanced to run a node effectively.

The triad of per-node balance

Memory – A minimum of 8 GB of RAM: 4 GB for the Java heap and 4 GB for cache. The maximum (not a hard limit, as you will see) is 16 GB of RAM: 8 GB for the Java heap and 8 GB for cache. The maximum is 16 GB because if the heap is too large then a full Java garbage collection, which pauses the whole process, will take too long and the node will fall out of the cluster. In our experience a dual-socket, quad-core system (8 hardware threads) in production can pause for upwards of 10 seconds doing a full garbage collection of an 8 GB heap.
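Those heap figures normally end up pinned in conf/cassandra-env.sh rather than left to the automatic calculation; the values below reflect our sizing, not a universal recommendation.

    # conf/cassandra-env.sh - fix the heap sizes explicitly
    MAX_HEAP_SIZE="8G"     # Java heap ceiling; keeping it at or below ~8 GB bounds GC pauses
    HEAP_NEWSIZE="800M"    # young generation; commonly sized at around 100 MB per core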

Processor – Quad core is the minimum. It mainly affects garbage collection and SSTable compaction times, but those are exactly what will cause your cluster to do funny things like dropping a node for a few minutes.

Disk – Spinning disks are a must, not SSDs. Under the hood Cassandra does all of its writing to disk in a serial fashion, which is optimized for traditional spinning media. The commit log should have its own disk, as it is effectively being written to continuously, and this is why an SSD will not last long. The data disks need high throughput as well as fast access; this allows the Memtables to be flushed to disk as quickly as possible, reducing the impact on other actions.
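Splitting the commit log and data files onto their own spindles is just a couple of settings in cassandra.yaml; the mount points below are placeholders for wherever the dedicated disks are mounted.

    # cassandra.yaml - keep the commit log and data files on separate disks
    commitlog_directory: /mnt/commitlog     # the dedicated 10,000 RPM disk
    data_file_directories:
        - /mnt/data                         # the hardware stripe of data disks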

In the end we bought systems with the following specs:

  • 1 × 6-core (12 threads with hyper-threading) Xeon processor running at 2.0 GHz
  • 16 GB of RAM (quad-channel DDR3)
  • 1 OS disk, 7,200 RPM
  • 1 commit log disk, 10,000 RPM
  • 3 data disks in a hardware stripe, 10,000 RPM

Each node draws less than 300 watts at peak and Cassandra shows an average write latency of 2.2 ms!
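That latency figure is reported by Cassandra itself; nodetool will show it per column family on any node, for example:

    # Per-column-family statistics, including read and write latency
    nodetool -h localhost cfstats
    # Thread pool and dropped-message counters, handy when a node misbehaves
    nodetool -h localhost tpstats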

The performance increase from moving the cluster closer to the application has been huge. One report that would take upwards of a minute in AWS (returning a couple of hundred megabytes of data) now runs in under 10 seconds when the Cassandra system is cold, and in under 2 seconds once the cluster has had time to warm up the cache.

The most important thing to remember with Cassandra is that it is a write-biased data store. Writes are very fast, generally faster than reads, so you have to think differently when writing an application that uses Cassandra. It is better to write a new value that simply overwrites the old one than it is to load the old value and then update it. Get the data model right and Cassandra will guide you to a new world of highly available, fast data storage. If your workload is read-biased then Cassandra may not be for you.
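As a small illustration of that overwrite-rather-than-read-modify-write style, here is some CQL piped through cqlsh; the keyspace, table and values are hypothetical.

    # Preferred: just write the new value - in Cassandra an INSERT is an upsert
    echo "INSERT INTO metrics.daily_totals (day, total) VALUES ('2013-06-01', 42);" | cqlsh localhost

    # Anti-pattern here: read the old value, modify it, then write it back
    #   SELECT total FROM metrics.daily_totals WHERE day = '2013-06-01';
    #   UPDATE metrics.daily_totals SET total = 43 WHERE day = '2013-06-01';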