I created an exploration vehicle (https://goo.gl/T6YwAz) that, in its first iteration, sends camera data packed into events of the IoT Hub. It is a very expensive solution and I need a different one. WebSockets seem to be the way to go. I can host a web application inside Azure and keep it perfectly scalable! After all - to keep the IoT spirit alive - we assume that someday everyone in the world will use our device.
Device
The device connects to a WebSocket-enabled server with its DeviceId and sends the captured data as a binary message.
...
// Connect to the server, passing the unique device id as a query parameter.
await mStreamWebSocket.ConnectAsync(new Uri($"{Globals.WEBSOCKET_ENDPOINT}?device={MainPage.GetUniqueDeviceId()}"));
...
// Capture a photo, copy its frame into a byte array and send it as one binary message.
var capturedPhoto = await mLowLagCapture.CaptureAsync();
using (var rac = capturedPhoto.Frame.CloneStream())
{
    var dr = new DataReader(rac.GetInputStreamAt(0));
    var bytes = new byte[rac.Size];
    await dr.LoadAsync((uint)rac.Size);
    dr.ReadBytes(bytes);
    await mStreamWebSocket.OutputStream.WriteAsync(bytes.AsBuffer());
}
...
WebServer
Azure App Service requires Web Sockets to be enabled in the server configuration first.
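This can be done in the portal (Application settings, Web sockets: On) or, for example, via the Azure CLI; the app and resource group names below are placeholders:

```shell
# Enable WebSocket support for an App Service (names are placeholders)
az webapp config set --name my-rover-app --resource-group my-rover-rg --web-sockets-enabled true
```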
The web server accepts WebSocket requests and, on the receiver endpoint, keeps them in a public collection. Device and PC both send a device query parameter, which is the DeviceId.
if (http.WebSockets.IsWebSocketRequest && http.Request.Query.ContainsKey("device"))
{
    var deviceid = http.Request.Query["device"].ToString();
    var webSocket = await http.WebSockets.AcceptWebSocketAsync();
    if (webSocket.State == WebSocketState.Open)
    {
        // Only one receiver per device: replace an existing connection for the same id.
        var existingConnection = ImageReceiverWebSocketMiddleware.Connections.Where(x => x.DeviceId.Equals(deviceid)).FirstOrDefault();
        if (existingConnection != null)
        {
            ImageReceiverWebSocketMiddleware.Connections.Remove(existingConnection);
        }
        Connections.Add(new SocketConnections { DeviceId = deviceid, SocketConnection = webSocket });
        while (webSocket.State == WebSocketState.Open)
        {
            var buffer = new ArraySegment<Byte>(new Byte[4096]);
            var received = await webSocket.ReceiveAsync(buffer, CancellationToken.None);
            switch (received.MessageType)
            {
                case WebSocketMessageType.Close:
                    // Remove the connection and confirm the close handshake;
                    // the loop then exits because the socket is no longer open.
                    var socket = Connections.Where(x => x.SocketConnection == webSocket).First();
                    Connections.Remove(socket);
                    await webSocket.CloseAsync(WebSocketCloseStatus.NormalClosure, "Closed in server by the client", CancellationToken.None);
                    continue;
            }
        }
    }
}
else
{
    await mNext.Invoke(http);
}
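The `Connections` list and the `SocketConnections` holder are referenced above but not shown. A minimal sketch of how they and the middleware class could look (the names follow the snippets above, the details are my assumptions, not the original implementation):

```csharp
// Sketch only - the real middleware contains the accept/receive loop shown above.
public class SocketConnections
{
    public string DeviceId { get; set; }
    public System.Net.WebSockets.WebSocket SocketConnection { get; set; }
}

public class ImageReceiverWebSocketMiddleware
{
    // Shared across requests so the sender endpoint can look up receivers by DeviceId.
    public static readonly List<SocketConnections> Connections = new List<SocketConnections>();

    private readonly RequestDelegate mNext;

    public ImageReceiverWebSocketMiddleware(RequestDelegate next)
    {
        mNext = next;
    }

    public async Task Invoke(HttpContext http)
    {
        // ... accept the socket and run the receive loop shown above ...
    }
}
```

Note that a plain `List<T>` is not thread-safe under concurrent requests; a `ConcurrentDictionary` keyed by DeviceId would be a more robust choice.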
The sender endpoint is similar, except that it forwards the received data to all receiver connections registered for the same device.
// Read all parts of the multi-part message into a single byte list.
List<byte> data = new List<byte>(buffer.Take(received.Count));
while (received.EndOfMessage == false)
{
    received = await webSocket.ReceiveAsync(buffer, CancellationToken.None);
    data.AddRange(buffer.Take(received.Count));
}
// Forward the message to every receiver registered for the same DeviceId.
var socketConnectionList = ImageReceiverWebSocketMiddleware.Connections.Where(x => x.DeviceId.Equals(deviceid, StringComparison.Ordinal)).ToArray();
foreach (var socketConnection in socketConnectionList)
{
    var destSocket = socketConnection.SocketConnection;
    if (destSocket.State == System.Net.WebSockets.WebSocketState.Open)
    {
        var type = WebSocketMessageType.Binary;
        try
        {
            await destSocket.SendAsync(new ArraySegment<byte>(data.ToArray()), type, true, CancellationToken.None);
        }
        catch (Exception ex)
        {
            AppInsights.Client.TrackException(ex);
        }
    }
    else
    {
        AppInsights.Client.TrackTrace("Removing closed connection");
        ImageReceiverWebSocketMiddleware.Connections.Remove(socketConnection);
    }
}
Device and PC each have their own route mapping, configured inside the Startup.Configure method.
app.UseWebSockets();
app.Map("/ws/receiver", (sub) =>
{
    sub.UseMiddleware<ImageReceiverWebSocketMiddleware>();
});
app.Map("/ws/sender", (sub) =>
{
    sub.UseMiddleware<ImageSenderWebSocketMiddleware>();
});
PC
The PC client, in my case a WPF application, connects to the server and waits for new data.
string wsUri = $"ws://yourserver.azurewebsites.net/ws/receiver?device={Globals.DEVICE_ID}";
var socket = new ClientWebSocket();
await socket.ConnectAsync(new Uri(wsUri), ct);
Then it reads all message parts and finally converts them into a WPF-compatible object.
var buffer = new ArraySegment<Byte>(new Byte[40960]);
WebSocketReceiveResult rcvResult = await socket.ReceiveAsync(buffer, ct);
if (rcvResult.MessageType == WebSocketMessageType.Binary)
{
    // Collect all parts of the binary message.
    List<byte> data = new List<byte>(buffer.Take(rcvResult.Count));
    while (rcvResult.EndOfMessage == false)
    {
        rcvResult = await socket.ReceiveAsync(buffer, CancellationToken.None);
        data.AddRange(buffer.Take(rcvResult.Count));
    }
    // Convert the received bytes into a WPF BitmapSource.
    MemoryStream ms = new MemoryStream(data.ToArray());
    var image = Image.FromStream(ms);
    var oldBitmap = new Bitmap(image);
    var hBitmap = oldBitmap.GetHbitmap(System.Drawing.Color.Transparent);
    try
    {
        var bitmapSource = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
            hBitmap,
            IntPtr.Zero,
            new Int32Rect(0, 0, oldBitmap.Width, oldBitmap.Height),
            BitmapSizeOptions.FromEmptyOptions());
        BitmapAquired?.Invoke(this, bitmapSource);
    }
    finally
    {
        // GetHbitmap hands out a GDI handle that must be released explicitly
        // (DeleteObject is the gdi32.dll P/Invoke), otherwise each frame leaks one.
        DeleteObject(hBitmap);
    }
    var picturesPath = Environment.GetFolderPath(Environment.SpecialFolder.MyPictures);
    image.Save(Path.Combine(picturesPath, "lastimagefromrover.jpg"));
    addToLog("Image message received");
}
And finally:
Oops, stupid surveillance cameras taking pictures in wrong moments. ;-)
Retrospective
I decided to capture single frames in the first version and see what I could learn from it. Looking back, it seems to have been the right choice. Next time I would capture a stream and convert it into single images to hopefully get a better frame rate. Yet, the biggest bottleneck is still the upstream connection to the server. Getting something like 20 FPS seems rather unrealistic right now.
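Capturing a continuous stream instead of single photos could, for example, use a MediaFrameReader on the device and push each frame through the same socket. A sketch, assuming mMediaCapture is already initialized and frameSource is a valid color frame source; the JPEG encoding step is omitted:

```csharp
// Sketch: subscribe to frames instead of taking single photos.
var reader = await mMediaCapture.CreateFrameReaderAsync(frameSource);
reader.FrameArrived += (sender, args) =>
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
        if (bitmap != null)
        {
            // Encode the SoftwareBitmap to JPEG and send it over the WebSocket,
            // the same way the single captured photo is sent above.
        }
    }
};
await reader.StartAsync();
```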