Problems with WordPress Like Plugin and Polylang

Sometimes drugs interact with each other, and the same happens with WordPress plugins. The following example demonstrates one such interaction. Both the Polylang and Facebook Like plugins work exactly as expected on their own; however, when used together, some things break. The Facebook Like plugin does roughly what its name implies: it adds the Facebook button to your WordPress blog. It also makes it possible to share a page directly to Facebook. Both features work flawlessly on their own. Polylang is one of the most useful plugins you can find if you need to publish posts in two or more languages. It matches the language configured in the browser against the language set for each post, and displays the post accordingly. Simple, and it works. However, when a translated post meets the Facebook Like plugin, things break down. Facebook uses a crawler to check access to shared pages and to identify any images to show on Facebook; this is known as facebookexternalhit (user agent facebookexternalhit/1.0). Now, this crawler does not set the language, or sets it wrong, and Polylang has no means to pick the correct version. The result is that when people try to share a page directly from the blog, sometimes …
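
A quick way to see what the crawler actually receives is to replay its request by hand with curl. This is only a rough sketch, and the URL below is a made-up placeholder, not one from the post:
# Fetch a translated post the way Facebook's crawler does (no language preference sent)
curl -sI -A "facebookexternalhit/1.0" http://example.com/fr/some-post/
# Compare with a request carrying a browser-style language header
curl -sI -H "Accept-Language: fr" http://example.com/fr/some-post/
If the two responses differ (for instance, the crawler gets the default-language version), that matches the behaviour described above.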

The state of openID third-party login integration on WordPress

Last week a friend of mine asked me why couldn't I simply use the same authentication scheme as on Blogger? From a user's point of view, it made some sense: why have one more login on yet another site? From a security point of view, it may also make sense. In the aftermath of the LinkedIn scandal, where around 9 million hashed passwords were disclosed and about 2 million passwords were recovered, I wouldn't feel comfortable having another unprotected database containing sensitive user data, even if that database was mine. It's far better to have Google, Facebook or Microsoft handle it, and in a worst-case scenario, let them take the fall. To make things even better, there is an Internet standard being developed, known as openID, which would make different authentication processes interoperable. And it mostly works, for those sites which support it, which at this moment are mainly Google and Yahoo. Others, such as Microsoft, mainly haven't yet seen the light (of standards usage, but this is not uncommon for Microsoft). In the end, and after a few tests with different plugins, I settled on Social Connect. Let's see how it goes. This plugin integrates not …

Upgrading the hard drive on an old Macbook Pro (and upgrading to Lion)

My old 13″ Macbook Pro's 250GB hard drive is at its limit. It's full and slow, and it is running Snow Leopard. So it was time to change it all. The old disk was a 250GB Hitachi HTS542525K9SA00 spinning at 5400RPM; the new one, a Seagate Momentus 750GB 7200RPM beast. The plan:
1. Get the new hard drive into a USB case
2. Install Snow Leopard on it
3. Install Lion on the USB drive
4. Check that all the applications are OK
5. Copy all the data from the old drive to the new one (*)
6. Install the new drive into the Macbook
(*) Why not use the migration tools? Simple. Sometimes we need a clean start, in order to make sure the new system is as untainted as possible. The results: this was the poor state of my old hard drive: [image] Then, the new hard drive, with Lion, through the USB case: [image] And voilà, even from a bottlenecked environment such as a USB case, the new drive starts to shine. It's 10% faster on writes and 40% faster on reads. Although …
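
For reference, a crude way to get write and read throughput figures like the ones quoted above is a plain dd test. This is only a sketch: the mount point is a placeholder, and the read figure is only meaningful if the test file is larger than the machine's free RAM (otherwise you are mostly measuring the cache):
# Sequential write test: 2GB of zeros onto the drive under test
dd if=/dev/zero of=/Volumes/NewDrive/ddtest bs=1m count=2048
# Sequential read test: read the same file back and throw it away
dd if=/Volumes/NewDrive/ddtest of=/dev/null bs=1m
rm /Volumes/NewDrive/ddtest
On OSX, dd prints the bytes transferred and a bytes-per-second figure when it finishes.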

Problems of replicating Virtual Machines

One of the biggest advantages of VMs is the capability of cloning and replicating them. This allows the creation of a number of similar systems without having to repeat the configuration and installation time. Unfortunately, there are a small number of downsides: the MAC addresses are also cloned. Remember to generate new MAC addresses for each new cloned VM. Also, Ubuntu caches this value, which will generate a typical error message in dmesg: "udev: renamed network interface eth0 to eth1". To avoid this problem, delete the file /etc/udev/rules.d/70-persistent-net.rules and restart the system.
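
In practice the cleanup looks something like this. A sketch only: "cloned-vm" is a placeholder name, and the VBoxManage line applies if the clone was made with VirtualBox (tools such as virt-clone regenerate MACs on their own):
# On the host: give the clone a fresh, randomly generated MAC address
VBoxManage modifyvm "cloned-vm" --macaddress1 auto
# Inside the cloned Ubuntu guest: drop the cached MAC-to-interface mapping and reboot
sudo rm /etc/udev/rules.d/70-persistent-net.rules
sudo reboot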

How to manage a small Virtual Machine infrastructure

Over the last months, I felt the need to set up a small virtual machine infrastructure to manage every small need of a laboratory at work. The infrastructure is the following:
1x Apple Macbook Pro 4.1 – for development of the appliances
2x HP DL360 with 8GB RAM running Ubuntu 10.04 – to run the VMs
Note that this is NOT a production environment. This only covers a lab's needs and requirements, so things like redundancy and fail-safes will not be included. Over the next few weeks, I'll add some important notes.
Workflow
The workflow is quite straightforward: set up the VM on the Macbook running Oracle VirtualBox; deploy the VM on the HP server running Linux KVM. To go from one step to the other, conversion between formats is required. We'll cover that later.
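
In the meantime, roughly speaking, the conversion can be done along these lines; just a sketch with placeholder file names, ahead of the proper write-up:
# On the Macbook: export the VirtualBox disk (VDI) to a raw image
VBoxManage clonehd appliance.vdi appliance.raw --format RAW
# On the KVM host: convert the raw image to qcow2 for use with KVM/libvirt
qemu-img convert -f raw -O qcow2 appliance.raw appliance.qcow2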

Recover data from a non-booting Macbook Air (OSX 10.6 and 10.7, and probably 10.8)

The Macbook Air is the best tool for someone whose main function is to attend meetings. Although the new (2011) models all feature SSDs, the older models only offered those as an option. And even the best built machines are not free from hard drive failures; the hard drive remains the component with the highest failure rate. Regardless, in case you get stuck with a non-booting Macbook Air, here's how to recover the data:
1. First, why does this even work? Simple. OSX uses a kind of file system called "Journaled", which is far different from FAT or NTFS, but quite similar to ReiserFS, XFS or most UNIX file systems around. The advantage? Consistency is (almost) always assured, even in the case of a power failure or a hard drive failure. So, even if a hard drive is having problems, it's usually capable of giving up its data.
2. DON'T TRY TO BOOT. If OSX fails to boot 3 times, don't try a fourth. One of the most interesting features of a UNIX system is the ability to boot WITHOUT WRITING to the hard drive. This allows a failing hard drive not to do even more damage …
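
A sketch of one possible approach (not necessarily the steps the full post describes), assuming the machine can be booted from an external OSX installer or drive: mount the internal volume read-only and copy the data off. The disk identifier and mount point below are placeholders:
# Booted from external media, never from the internal disk:
diskutil list                              # identify the internal volume, e.g. disk0s2
sudo mkdir /Volumes/rescue
sudo mount -t hfs -o rdonly /dev/disk0s2 /Volumes/rescue
# then copy the data to another external disk, e.g. with cp -R or rsync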

Check files opened by a process on OSX

Recently I've been fighting some performance issues with Spotlight on OSX 10.6. As sad as it may seem, the problem is not in Spotlight itself, but in its reliance on third parties to search inside files. In my case, the issue is related to Office 2010 Excel files:
Jul 26 10:52:51 macbook-2 mdworker32[18995]: (Normal) Import: Spotlight giving up on importing file /Users/xxx/Library/Mail/Mailboxes/xxx/xxx.mbox/Attachments/156147/2/_excel_filename_.xlsx after 240 seconds, 235.697 seconds of which was spent in the Spotlight importer plugin.
This makes Spotlight take DAYS to index all my files and emails. Regardless, the way to check what Spotlight is actually doing is quite simple. In the console, type:
sudo opensnoop -n mdworker
In the meantime, I'll try to steer Spotlight clear of Microsoft's bugs…

Generating random files

Why on earth would someone want to generate files of random content (not files with random names)? Well, there is one big reason to do it: generating incompressible files. This seems a small reason, but there are a number of usage scenarios (apart from proving that random content is incompressible), most of them focused on transmitting files. Although it is transparent to most people, some tools do background compression (namely HTTPS, IPsec and SSL VPNs, etc.), and as such, measuring real-world performance over those requires incompressible content. First, how to generate it (assuming you can talk *NIX)?
dd if=/dev/urandom of=random.file bs=1m count=100
Where:
if – input file, in this case the virtual file /dev/urandom
of – output file, the name of the destination file
bs – block size; the default block size for dd is 512 bytes, which makes sense when copying files, but is not terribly useful when creating files of a determined size. In this case, 1MB.
count – number of blocks to be copied
In this case, I needed to create a 100MB file of random content. The result:
> dd if=/dev/urandom of=random.file bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred …
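
To see the incompressibility claim in action, run the result through gzip and compare sizes; the compressed copy comes out about the same size (usually slightly larger, because of gzip's own overhead):
# Compress a copy of the random file and compare the two sizes
gzip -c random.file > random.file.gz
ls -l random.file random.file.gz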