
How to enable broadcast and multicast support on Amazon (AWS) EC2

January 19th, 2016

11 minutes to read

This article explains how to enable broadcast and multicast support on Amazon (AWS) EC2, which is required for certain enterprise applications and protocols.

Contents

  1. Introduction

  2. Software used

  3. Installation guide

  4. Post installation instructions

  5. Testing and performance

  6. Transfer of one large file to establish consistent throughput

  7. ZMQ Performance, throughput test

  8. OpenVPN, without encryption

  9. An example of multicast and broadcast in action using n2n and TIPC

  10. Summary

Introduction

Updated Aug 2018

Broadcast and multicast support is a requirement of many enterprise applications and clustering stacks. However, in our experience, the choice of hosting providers who can deliver these networking features is limited, which often forces a move to more expensive co-location. The situation is much the same with many cloud providers, who do not offer broadcast and multicast support. Currently, AWS does not officially support broadcast or multicast out of the box, so if you require instant results the only option is to use another cloud provider, such as Rackspace.

However, as AWS certified consultants who already utilise other AWS products and services, we wanted broadcast and multicast support within the AWS EC2 network, so we set about engineering our own solution.

Firstly, we're using AWS VPC (Virtual Private Cloud) instances (any size is supported). To enable Layer 2 communication over the network we are using the n2n Peer-to-Peer VPN software, bound to the internal Ethernet adapter of the AWS VPC instance. As n2n is a secure VPN server, it ships by default with encryption and compression; we removed both, as they are not required over an AWS VPC connection (read more about VPC security). For this demonstration we created a VPC with one subnet, in one availability zone, and used four "m1.small" EC2 instances with public IP addresses. Do not forget to adjust the security group(s) to only allow access from the internal network, which in our example is 192.168.100.0/16.
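
For reference, the same restriction can be applied with the AWS CLI. The following is a minimal sketch rather than part of the original setup: the security group ID is a placeholder for your own, and it assumes the supernode will listen on UDP port 1200 as configured later in this article.

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp --port 1200 \
    --cidr 192.168.100.0/16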

Network configuration of VPC Instances

Server name | Role       | IP Address    | VPN IP Address
test1       | supernode  | 192.168.100.1 | none
test2       | VPN client | 192.168.100.2 | 192.168.1.2
test3       | VPN client | 192.168.100.3 | 192.168.1.3
test4       | VPN client | 192.168.100.4 | 192.168.1.4

Software used

  • Operating system: Ubuntu (16.04 LTS)

  • n2n Peer-to-Peer VPN

  • ZeroMQ

Installation guide

Although n2n is available via the official Ubuntu repository, we will instead download the source code, as small tweaks have to be made to disable compression and encryption. In order to check out the source code, git is required. Once git is installed, proceed to the commands below.
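
If git and a compiler toolchain are not already present on the instance, they can be installed from the standard Ubuntu packages (assuming a stock Ubuntu image):

root@test1 $ sudo apt-get update && sudo apt-get install -y git build-essential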

You can use your favourite text editor; for this demonstration we're using vi, but you may substitute nano or any other.

root@test1 $ git clone https://github.com/ntop/n2n.git
root@test1 $ cd n2n/n2n_v2

Disable encryption

root@test1 $ vi Makefile

Search for

N2N_OPTION_AES

and change it to

N2N_OPTION_AES=no

Now disable compression

root@test1 $ vi n2n.h

Search for

#define N2N_COMPRESSION_ENABLED 1

Change it to

#define N2N_COMPRESSION_ENABLED 0
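
If you would rather script these two edits than open an editor, a sed one-liner for each should work; this assumes the lines in Makefile and n2n.h read exactly as shown above:

root@test1 $ sed -i 's/^N2N_OPTION_AES.*/N2N_OPTION_AES=no/' Makefile
root@test1 $ sed -i 's/#define N2N_COMPRESSION_ENABLED 1/#define N2N_COMPRESSION_ENABLED 0/' n2n.h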

Once the files have been edited, we need to compile n2n by typing the following command from within the n2n_v2 folder

root@test1 $ make

Once make has been successfully run, you will see two binaries, supernode and edge.

supernode is only required on the supernode server, and edge is required on the others.

Compile n2n on every test server in the same way, or simply copy the edge binary to each server.
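
For example, distributing the edge binary from the build host could look like this (assuming root SSH access between the instances; the destination path is our choice, not prescribed by n2n):

root@test1 $ for i in 2 3 4; do scp ./edge root@192.168.100.$i:~/; done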

The supernode is responsible for introducing new VPN clients, while edge is used to connect. VPN clients initially discover each other via the supernode, but once a connection has been established they communicate directly, not through the supernode. This avoids network bottlenecks and mitigates the single point of failure present in a star-topology VPN network. If the supernode were to fail, the VPN would continue to function, but no new VPN nodes would be able to register. If a hot standby is wanted for redundancy, another EC2 instance with supernode configured can be started using standard AWS failover procedures.

Post installation instructions

On server test1, start supernode

root@test1 $ sudo ./supernode -l 1200

If you haven't already blocked public Internet access to the supernode server, you can do so with iptables by executing the following command

root@test1 $ sudo iptables -I INPUT ! -s 192.168.100.0/16 -m udp -p udp --dport 1200 -j DROP
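
Note that an iptables rule added this way will not survive a reboot. One common approach on Ubuntu, an assumption on our part rather than part of the original setup, is the iptables-persistent package:

root@test1 $ sudo apt-get install -y iptables-persistent
root@test1 $ sudo netfilter-persistent save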

Now on server test2, start the VPN client

root@test2 $ sudo ./edge -l 192.168.100.1:1200 -c Buckhill -a 192.168.1.2

Check the interface by typing

root@test2 $ ifconfig edge0

You should see the following output

edge0     Link encap:Ethernet  HWaddr c6:9b:6f:bf:cb:49
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::c49b:6fff:febf:cb49/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:418 (418.0 B)

On server test3, start the VPN client

root@test3 $ sudo ./edge -l 192.168.100.1:1200 -c Buckhill -a 192.168.1.3

Now check VPN connectivity on server test3 by typing

root@test3 $ ping -c 1 192.168.1.2

The results should look like this

PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.906 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.906/0.906/0.906/0.000 ms

You can now connect as many nodes as required.
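
For example, test4, which is used later in the TIPC demonstration, joins the network in exactly the same way, following the addressing plan from the table above:

root@test4 $ sudo ./edge -l 192.168.100.1:1200 -c Buckhill -a 192.168.1.4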

Testing and performance

Depending on your network requirements, it's important to establish the throughput and the overhead created by operating over a VPN. To determine this we performed two types of tests between two nodes over the newly created VPN network.

Transfer of one large file to establish consistent throughput

On the test2 server, netcat was started in listen mode

root@test2 $ nc -l 5001 > /dev/null

On the test3 server, a 1GB file was created and then sent over the network to test2

root@test3 $ dd if=/dev/urandom bs=1M count=1000 of=/tmp/big_file

The dd output looks like this

1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 309.952 s, 3.4 MB/s

root@test3 $ cat /tmp/big_file | pv | nc 192.168.1.2 5001
1e+03MB 0:00:25 [38.7MB/s]
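
To produce the "eth" baseline shown in the results below, the same transfer can be repeated against the instance's internal address rather than the VPN address, with netcat listening on test2 again:

root@test3 $ cat /tmp/big_file | pv | nc 192.168.100.2 5001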


If you wish to reproduce our ZeroMQ demonstrations shown below, you will need to install ZeroMQ 4.1 with TIPC support. Due to the complexity of this process, it is outside the scope of this article.

ZMQ Performance, throughput test

The tests we used are the standard ZMQ performance tests that ship with libzmq.

On server test2, start the receiver

root@test2 $ ./local_thr tcp://*:5555 4096 100000

For this test we used a 4KB message size, which is bigger than the average request in our own applications and above the MTU of the network adapter.

On server test3, start the test

root@test3 $ ./remote_thr tcp://192.168.100.2:5555 4096 100000

Reference: http://zeromq.org/results:perf-howto
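
libzmq also ships matching latency tests (local_lat and remote_lat) alongside the throughput tests. We did not benchmark latency in this article, but if round-trip time matters for your workload they can be run in the same way, for example:

root@test2 $ ./local_lat tcp://*:5555 4096 1000

root@test3 $ ./remote_lat tcp://192.168.100.2:5555 4096 1000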

In order to compare results with another VPN service or product, we performed tests using OpenVPN in a point-to-point topology.

OpenVPN, without encryption

From server test2

root@test2 $ sudo openvpn --cipher none --proto udp --dev tun --comp-lzo --auth none --prng none --mode p2p --ifconfig 192.168.1.2 192.168.1.3 --port 1194

From server test3

root@test3 $ sudo openvpn --cipher none --proto udp --dev tun --comp-lzo --auth none --prng none --mode p2p --ifconfig 192.168.1.3 192.168.1.2 --remote 192.168.100.2 1194

OpenVPN without encryption and compression

From server test2

root@test2 $ sudo openvpn --cipher none --proto udp --dev tun --auth none --prng none --mode p2p --ifconfig 192.168.1.2 192.168.1.3 --port 1194

From server test3

root@test3 $ sudo openvpn --cipher none --proto udp --dev tun --auth none --prng none --mode p2p --ifconfig 192.168.1.3 192.168.1.2 --remote 192.168.100.2 1194
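
With the tunnel up, the same netcat and ZMQ tests can be repeated over the OpenVPN addresses. A quick connectivity check first does no harm:

root@test3 $ ping -c 1 192.168.1.2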

Test results

Legend

Name in graph | Description
eth           | Ethernet device
n2n           | n2n VPN with encryption and compression enabled
n2n -e        | n2n VPN with encryption disabled
n2n -e -c     | n2n VPN with encryption and compression disabled
OVPN -e       | OpenVPN with encryption disabled
OVPN -e -c    | OpenVPN with encryption and compression disabled

[Chart: Transfer rate, large file]

[Chart: ZeroMQ messages per second]

[Chart: ZeroMQ bandwidth test]

[Chart: Messages per second, based on message size]

We also performed a ZMQ benchmark of throughput (messages per second) in relation to message size.

If the message fits into n2n's MTU, which is 1400, it will not be fragmented. Request headers are usually below 1KB in size, so a higher number of messages per second can be achieved than with a 4KB request.

Another conclusion we've drawn is that maximum bandwidth capacity doesn't differ by more than 10% in relation to message size.
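
As a side note, the tunnel's effective MTU can be verified with a don't-fragment ping. With a 1400-byte MTU, the largest ICMP payload that fits is 1372 bytes (1400 minus 20 bytes of IP header and 8 bytes of ICMP header):

root@test3 $ ping -M do -s 1372 -c 1 192.168.1.2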

An example of multicast and broadcast in action using n2n and TIPC

Our applications make use of the TIPC protocol, which operates on top of Layer 2 packet networks. On Amazon's EC2 we can't access that network layer. The following demonstrates how a customised n2n VPN solves this problem.

TIPC reference: http://tipc.sourceforge.net

The TIPC node configuration is taken from http://hintjens.com/blog:71

For this demonstration we needed one extra server, called test4, which is also a member of the n2n VPN network.

Our TIPC configuration

root@test2 $ sudo modprobe tipc && sudo tipc-config -a=1.1.2 -be=eth:edge0

root@test3 $ sudo modprobe tipc && sudo tipc-config -a=1.1.3 -be=eth:edge0

root@test4 $ sudo modprobe tipc && sudo tipc-config -a=1.1.4 -be=eth:edge0
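
Before running the demo, it is worth confirming that the TIPC kernel module actually loaded on each node; this sanity check is our addition, not part of the original write-up:

root@test2 $ lsmod | grep tipc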

The multicast_demo from TIPC utilities is used for the following demonstration

On servers test2 and test3 run the listener

root@test2 $ ./multicast_demo/server_tipc
Server: port {18888,300,399} created
Server: port {18888,200,299} created
Server: port {18888,100,199} created
Server: port {18888,0,99} created

root@test3 $ ./multicast_demo/server_tipc
Server: port {18888,300,399} created
Server: port {18888,200,299} created
Server: port {18888,100,199} created
Server: port {18888,0,99} created

On the fourth server a message is sent over multicast

root@test4 $ sudo ./multicast_demo/client_tipc
****** TIPC client multicast demo started ******
Client: sending message to {18888,99,100}
Client: sending message to {18888,150,250}
Client: sending message to {18888,200,399}
Client: sending message to {18888,0,399}
Client: sending termination message to {18888,0,399}
****** TIPC client multicast demo finished ******

Both servers (test2 and test3) receive the messages and return output which looks like this

Server: port {18888,200,299} received: message to {18888,150,250}
Server: port {18888,100,199} received: message to {18888,99,100}
Server: port {18888,100,199} received: message to {18888,150,250}
Server: port {18888,0,99} received: message to {18888,99,100}
Server: port {18888,300,399} received: message to {18888,200,399}
Server: port {18888,300,399} received: message to {18888,0,399}
Server: port {18888,300,399} terminated
Server: port {18888,200,299} received: message to {18888,200,399}
Server: port {18888,200,299} received: message to {18888,0,399}
Server: port {18888,200,299} terminated
Server: port {18888,100,199} received: message to {18888,0,399}
Server: port {18888,100,199} terminated
Server: port {18888,0,99} received: message to {18888,0,399}
Server: port {18888,0,99} terminated

Summary

Success: a customised n2n VPN setup can be used to operate broadcast and multicast effectively over the AWS EC2 network.

While the average throughput is lower than that of the standard Ethernet interface with no VPN, for both file transfer and ZMQ, n2n is still comparable with OpenVPN, which is considered a very fast point-to-point VPN server, and it adds the benefit of running a VPN cluster.

Depending on the packet size, we can send between 3,000 and 35,000 messages per second over ZMQ, and achieve over 80Mbit/second of throughput on a small instance.

If more throughput is required, larger instances can be used.

To conclude: depending on your exact requirements, a customised n2n unlocks broadcast and multicast over the EC2 network. If you require more messages per second than you can achieve on EC2, we recommend Rackspace performance cloud instances, which are more expensive but officially support broadcast and multicast.

For our own requirements, the message rates achieved with n2n provide plenty of excess capacity.

If you have any feedback please comment below.

Natasha Rajak

Tags: aws, ec2, amazon, cloud, networking, VPC
