Problems of replicating Virtual Machines

One of the biggest advantages of VMs is the ability to clone and replicate them. This allows the creation of a number of similar systems without repeating the configuration and installation time. Unfortunately, there are a few downsides: the MAC addresses are also cloned, so remember to generate new MAC addresses for each cloned VM. Also, Ubuntu caches this value, which produces a typical error message in dmesg: “udev: renamed network interface eth0 to eth1”. To avoid this problem, delete the file /etc/udev/rules.d/70-persistent-net.rules and restart the system.
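Both fixes can be put in a short script. A minimal sketch — the udev rules path is the one from the post (Ubuntu 10.04-era), while the MAC-generation snippet is my own addition, using a locally administered address:

```shell
#!/bin/bash
# On the cloned Ubuntu guest, first drop the cached name-to-MAC
# mapping so udev re-creates eth0 on the next boot, then reboot:
#   sudo rm /etc/udev/rules.d/70-persistent-net.rules
#   sudo reboot

# Generate a random MAC to assign to the clone in the hypervisor's
# network settings. The 0x02 first octet marks the address as
# "locally administered", so it cannot collide with a vendor MAC.
new_mac=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
  $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
  $((RANDOM % 256)) $((RANDOM % 256)))
echo "MAC for this clone: $new_mac"
```

The rm commands are left as comments so the snippet is safe to run anywhere; on a real clone, uncomment them and reboot before bringing the network up.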


How to manage a small Virtual Machine infrastructure

Over the last few months, I felt the need to set up a small virtual machine infrastructure to handle every small need in a laboratory at work. The infrastructure is the following:

1x Apple Macbook Pro 4.1 – for development of the appliances
2x HP DL360 with 8GB RAM running Ubuntu 10.04 – to run the VMs

Note that this is NOT a production environment. It only covers a lab's needs and requirements, so things like redundancy and failover will not be included. Over the next few weeks, I'll add some important notes.

Workflow

The workflow is quite straightforward:
1. Set up the VM on the Macbook running Oracle VirtualBox;
2. Deploy the VM on the HP server running Linux KVM.

To go from one step to the other, conversion between formats is required. We'll cover that later.
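As a preview of that conversion step, here is one possible route using qemu-img, which ships with the qemu/KVM packages on Ubuntu. This is a sketch, not the post's own procedure; the file names are placeholders, and the command is wrapped in a function so nothing runs by accident:

```shell
#!/bin/sh
# Sketch of the VirtualBox -> KVM image conversion step.
vdi_to_qcow2() {
  # Read a VirtualBox VDI image and write a qcow2 image for KVM.
  qemu-img convert -f vdi -O qcow2 appliance.vdi appliance.qcow2
}
```

Older qemu-img builds may not read VDI directly; in that case, VBoxManage clonehd appliance.vdi appliance.raw --format RAW first produces a raw image, which qemu-img can always convert.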


Recover data from a non-booting Macbook Air (OSX 10.6 and 10.7, and probably 10.8)

The Macbook Air is the best tool for someone whose main job is attending meetings. Although the new (2011) models all feature SSDs, the older models only offered those as an option. As such, even the best-built machines are not free from hard drive failures; the hard drive remains the component with the highest failure rate. Regardless, in case you get stuck with a non-booting Macbook Air, here's how to recover the data.

1. First, why does this even work? Simple. OSX uses a journaled file system (“Journaled” HFS+), which is far different from FAT or NTFS, but quite similar to ReiserFS, XFS or most UNIX file systems around. The advantage? Consistency is (almost) always assured, even in case of a power failure or hard drive failure. So, even if a hard drive is having problems, it's usually capable of giving up its data.

2. DON'T TRY TO BOOT. If OSX fails to boot 3 times, don't try the fourth. One of the most interesting features of a UNIX system is the ability to boot WITHOUT WRITING to the hard drive. This prevents a failing hard drive from doing even more damage to its …
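One way to honor the “no writes” rule is to attach the failing drive to a second Mac (for example via Target Disk Mode) and mount it read-only before copying anything off. A sketch under those assumptions — the device name /dev/disk2s2 and the destination path are hypothetical, so it's wrapped in a function rather than run directly:

```shell
#!/bin/sh
# Hypothetical rescue procedure, run from a SECOND Mac with the
# failing drive attached (e.g. via Target Disk Mode).
rescue_data() {
  sudo mkdir -p /Volumes/rescue
  # rdonly guarantees nothing is ever written to the failing disk
  sudo mount -t hfs -o rdonly /dev/disk2s2 /Volumes/rescue
  # Copy the data off; -a preserves permissions and timestamps
  rsync -a /Volumes/rescue/Users/ "$HOME/rescue-backup/"
}
```

Check diskutil list first to find the right device; mounting the wrong one read-write would defeat the whole point.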

Check the files opened by a process on OSX

Recently I've been fighting some performance issues with Spotlight on OSX 10.6. As sad as it may seem, the problem is not in Spotlight itself, but in its reliance on third parties to search inside files. In my case, the issue is related to Office 2010 Excel files:

Jul 26 10:52:51 macbook-2 mdworker32[18995]: (Normal) Import: Spotlight giving up on importing file /Users/xxx/Library/Mail/Mailboxes/xxx/xxx.mbox/Attachments/156147/2/_excel_filename_.xlsx after 240 seconds, 235.697 seconds of which was spent in the Spotlight importer plugin.

This makes Spotlight take DAYS to index all my files and emails. Regardless, the way to check what Spotlight is actually doing is quite simple. On the console, type:

sudo opensnoop -n mdworker

In the meantime, I'll try to get Spotlight around Microsoft's bugs…
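For reference, the same idea wrapped in two tiny hypothetical helpers; opensnoop is a DTrace script bundled with OSX and needs root, and its -p option narrows the output to a single PID, such as the one in the log line above:

```shell
#!/bin/sh
# Hypothetical wrappers around opensnoop (DTrace-based, OSX, root).
snoop_spotlight() {
  # Every file opened by any Spotlight importer process, by name:
  sudo opensnoop -n mdworker
}
snoop_pid() {
  # Every file opened by one specific process, by PID:
  sudo opensnoop -p "$1"
}
```

Both run until interrupted with Ctrl-C, printing one line per open() call.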

Comments and registration now available!

Hi all, I'm very sorry for the interruption in the continuous flow of posts I was maintaining up until September, but as with everyone, when the workload increases, non-essential things start to get set aside. Regardless, this project hasn't ended, and the flow of new visitors has now reached a stable level. As such, registration for new users and comments are now open to all. Feel free to comment on anything you disagree with or anything else you find relevant.


Generating random files

Why on earth would someone want to generate files of random content (not files with random names)? Well, there is one big reason to do it: generating incompressible files. This seems like a small reason, but there are a number of usage scenarios (apart from proving that random content is incompressible), most focused on transmitting files. Although it is transparent to most people, some tools do background compression — namely HTTPS, IPSEC and SSL VPNs, etc. — and as such, measuring real-world performance on those requires incompressible content. First, how to generate it (assuming you can talk *NIX)?

dd if=/dev/urandom of=random.file bs=1m count=100

Where:
if – input file, in this case the virtual file /dev/urandom
of – output file, the name of the destination file
bs – block size; the default block size for dd is 512 bytes, which makes sense when copying files, but is not terribly useful when creating files of a determined size. In this case, 1MB.
count – number of blocks to be copied

In this case, I needed to create a 100MB file of random content. The result:

> dd if=/dev/urandom of=random.file bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 13.460029 …
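A quick way to confirm that the output really is incompressible: run it through gzip and compare the sizes. (GNU dd is shown here, hence the uppercase bs=1M; the BSD dd on OSX takes the lowercase 1m used above.)

```shell
# Generate 1 MB of random data, then try to compress it.
dd if=/dev/urandom of=random.file bs=1M count=1 2>/dev/null
gzip -c random.file > random.file.gz
# gzip cannot shrink random input; the "compressed" copy ends up
# slightly larger than the original due to the gzip framing.
ls -l random.file random.file.gz
```

Run the same test against a file of zeros (if=/dev/zero) and the compressed copy shrinks to almost nothing, which is exactly why zeros are useless for transfer benchmarks.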