  • The Gene: An Intimate History

    The Gene: An Intimate History is a moving account of the discovery and development of genetics, the science of inheritance and of the chemical basis that gives living things their forms and functions. The book is also the author’s deeply personal story of how genetically linked diseases burdened his extended family.

  • Implement shapenet face landmark detection in Tensorflow

    In my previous post on building a face landmark detection model, the Shapenet paper was implemented in Pytorch. With Pytorch, however, running the model on mobile requires converting it to Caffe. Though there is a tool to take care of that, some operations are not supported, and in the case of Shapenet it was not something I knew how to fix yet. It turned out to be simpler to just re-implement Shapenet in Tensorflow and then convert it to Tensorflow Lite.
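
    The conversion step itself is small. Below is a minimal sketch of that last step, assuming the re-implemented Shapenet has already been exported as a TF 2.x SavedModel; the "shapenet_saved_model/" path is a placeholder, not the actual export from the post.

      import tensorflow as tf

      # Load the exported SavedModel and convert it to a Tensorflow Lite flatbuffer.
      converter = tf.lite.TFLiteConverter.from_saved_model("shapenet_saved_model/")
      converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional weight quantization
      tflite_model = converter.convert()

      with open("shapenet.tflite", "wb") as f:
          f.write(tflite_model)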

  • Viettel's mobile money and the threat to banks

    In a recent conversation, my dear friend made the claim that once telecommunication providers like Viettel and VNPT have their mobile money licences approved, they would overtake banks as providers of financial services. Banks would inevitably be sidelined. I frowned at the notion of an incoming onslaught of telcos, as mobile money is not a new concept. If one looks at it as the transfer and storage of monetary value via cellphone, the concept can be traced back to the era of feature phones. Back then, phone subscribers could top up other people’s mobile numbers from their own balances, and services like ringtones and quizzes could charge users via SMS. Mobile money has thus been in limited use for a long time. And a few years ago, mobile wallets (e-wallets like ZaloPay, Momo…), smartphone applications that facilitate payment and money transfer via phone, already began their costly quests to acquire users. So what is the big deal about this mobile money development?

  • Why the Libra was created

    The day before yesterday, Facebook announced its new currency, the Libra, as part of its effort to break into the payment market. Trying to eat the payment cake has been a long-anticipated move for Facebook, since its Chinese social messaging counterpart, WeChat, has demonstrated how widespread mobile payment can be. But unlike WeChat, Facebook does not just provide a payment service; it goes as far as creating a blockchain-based currency. Why the trouble?

  • Use MobileNetV2 as feature extractor in Tensorflow

    Applying machine learning to image processing tasks sometimes feels like toying with Lego blocks. One base block to extract feature vectors from images, another block to classify… Popular choices of feature extractors are MobileNet, ResNet, and Inception. And as with any other engineering problem, choosing a feature extractor is about weighing trade-offs between speed, accuracy, and size. For my current task of running ML on mobile devices, MobileNetV2 seems to be a good fit, as it is fast, quantization friendly, and does not sacrifice too much accuracy. Tensorflow provides a reference implementation of MobileNetV2 that makes using it much easier.
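
    The post works with Tensorflow's reference implementation from the tensorflow/models repository; as a rough sketch of the same idea through a different entry point, the Keras application version can serve as a drop-in feature extractor. The input size and the use of ImageNet weights below are assumptions, not settings from the post.

      import numpy as np
      import tensorflow as tf

      # MobileNetV2 without its classification head: the output is a feature
      # vector (global average pooling over the last convolutional feature map).
      extractor = tf.keras.applications.MobileNetV2(
          input_shape=(224, 224, 3), include_top=False, pooling="avg",
          weights="imagenet")

      # Dummy batch of one image; real inputs need the same preprocessing.
      images = np.random.uniform(0, 255, (1, 224, 224, 3)).astype(np.float32)
      images = tf.keras.applications.mobilenet_v2.preprocess_input(images)

      features = extractor(images)  # shape (1, 1280)
      print(features.shape)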

  • Thoughts on Yolo digital bank.

    Recently I made myself a customer of Yolo, the second neobank in Vietnam. Neobanks are banks that exist without physical branches: customers open bank accounts, transfer money, set up savings and interact with the bank entirely via a mobile app. The first neobank was Timo. Both Timo and Yolo were set up by Vpbank, a local commercial bank. To me, this is a much-needed move to shake up the banking industry and potentially benefit customers in the long run. However, I think the direction Yolo is heading in is not the right one.

  • Train a face detector using the TensorFlow object detection API.

    About 3 years ago, putting together a face detection camera application for mobile devices was a more involved task. I remember a colleague sitting next to me back then, tinkering with OpenCV and dlib to produce a demo with the right trade-off between size, speed and accuracy. As with every engineering problem, there is no one-size-fits-all solution. An on-device face detector may choose to shrink input images to speed up detection, though lower resolution results in lower accuracy. Fast forward to the present, and it has never been easier to customize your own face detection model, thanks to the folks at Google who open-sourced their Tensorflow object detection API. Besides, platforms like Colab provide hobbyists with free access to ML training-capable machines.
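
    The training side is mostly config files and command-line tools, but running a trained detector takes only a few lines. Here is a minimal sketch, assuming a TF2 SavedModel exported with the object detection API; the model path, input size and 0.5 score threshold are placeholders.

      import numpy as np
      import tensorflow as tf

      # Load a detector exported by the object detection API (placeholder path).
      detect_fn = tf.saved_model.load("exported_model/saved_model")

      # The exported signature expects a uint8 batch of shape [1, H, W, 3].
      image = np.zeros((1, 320, 320, 3), dtype=np.uint8)  # stand-in for a real frame
      detections = detect_fn(tf.constant(image))

      boxes = detections["detection_boxes"][0].numpy()    # [N, 4], normalized coords
      scores = detections["detection_scores"][0].numpy()  # [N]
      faces = boxes[scores > 0.5]                         # keep confident detections
      print(len(faces), "faces above threshold")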

  • Building face landmark detection model using Pytorch

    Having used dlib for face landmark detection, implementing my own neural network to achieve a similar goal could be fun and help the learning process. There is a recently released paper that outlines an approach of using machine learning to set the parameters of traditional statistical models. The author was nice enough to release his source code, which makes a great starting point. So I forked it, changed the code to remove some bulky dependencies, and more or less rewrote it to better fit my mental model, understanding it better in the process.
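
    For orientation, the bare bones of such a model look roughly like the sketch below. This is not the Shapenet architecture from the paper, just the generic coordinate-regression pattern it builds on; the layer sizes, input size and landmark count are arbitrary.

      import torch
      import torch.nn as nn

      class LandmarkRegressor(nn.Module):
          """Tiny CNN that regresses num_landmarks (x, y) pairs from a face crop."""
          def __init__(self, num_landmarks=68):
              super().__init__()
              self.num_landmarks = num_landmarks
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.head = nn.Linear(32, num_landmarks * 2)

          def forward(self, x):
              x = self.features(x).flatten(1)
              return self.head(x).view(-1, self.num_landmarks, 2)

      # Training is plain coordinate regression: MSE against ground-truth points.
      model = LandmarkRegressor()
      images = torch.randn(4, 1, 96, 96)   # dummy batch of grayscale face crops
      targets = torch.rand(4, 68, 2)       # dummy normalized landmark coordinates
      loss = nn.functional.mse_loss(model(images), targets)
      loss.backward()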

  • Allocate objects on memory buffer for performance gain

    I wrote about the cost of memory allocation in a recent post. Given a fixed amount of memory needed, reserving one large chunk in a single go is cheaper than grabbing smaller chunks one at a time. I did not realize that C++ has the facilities to take advantage of that until reading through the code of folly::IOBuf.
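
    folly::IOBuf does this in C++ by carving object and data storage out of one pre-reserved heap block, which has no direct Python equivalent, but the underlying claim, that one large allocation beats many small ones, can be eyeballed with a quick micro-benchmark; the sizes and counts below are arbitrary.

      import timeit

      # Many small allocations vs. one allocation of the same total size. The
      # exact ratio depends on the allocator and platform; the point is the
      # per-call overhead paid on every separate allocation.
      many_small = timeit.timeit("[bytearray(64) for _ in range(10_000)]", number=100)
      one_large = timeit.timeit("bytearray(64 * 10_000)", number=100)

      print(f"10,000 x 64 B allocations: {many_small:.3f}s")
      print(f"1 x 640 KB allocation:     {one_large:.3f}s")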

  • Echo server with libevent

    Network programming is one area where non-blocking IO can be used to achieve higher performance. A typical server needs to handle a few hundred to a few thousand connections at a time. With the thread-pool based blocking model, when a new connection is established, the server thread serving that connection triggers a kernel system call to read data from the socket file descriptor and is blocked until data are available. Thus, to handle, say, 200 connections concurrently, the server needs to spawn 200 threads.
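
    The post builds its echo server on libevent in C. As a rough illustration of the same single-threaded, event-driven alternative, here is a sketch using Python's standard selectors module instead of libevent; the host, port and buffer size are arbitrary.

      import selectors
      import socket

      sel = selectors.DefaultSelector()  # epoll/kqueue/poll under the hood

      def accept(server):
          conn, _ = server.accept()
          conn.setblocking(False)
          sel.register(conn, selectors.EVENT_READ, echo)

      def echo(conn):
          data = conn.recv(4096)
          if data:
              conn.send(data)   # best-effort echo; a real server would buffer leftovers
          else:                 # empty read: the peer closed the connection
              sel.unregister(conn)
              conn.close()

      server = socket.socket()
      server.bind(("127.0.0.1", 9000))
      server.listen()
      server.setblocking(False)
      sel.register(server, selectors.EVENT_READ, accept)

      while True:                      # one thread handling every connection
          for key, _ in sel.select():
              key.data(key.fileobj)    # dispatch to accept() or echo()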

Subscribe via RSS