> http://ychacks.challengepost.com/submissions/25770-listening.... - This was an audio parser that searched, in real time, what you were saying in a conversation. So if you said "Apple stock versus Microsoft stock" it would search and the result would be displayed.
I guess it really depends on your way of speaking, voice, and tone, because for me this does not work very well. Too bad; the idea is great.
EDIT: so what? I get downvoted because it does not work as well for me as it does in the demo? Interesting.
One of the team members here: there's definitely a lot of work needed to improve the general speech recognition capabilities behind it. We were also mostly excited by the idea that, as that technology improves over time, this type of service automatically improves as well.
Not only is this an impressive application of data science to a real problem; I'm even more impressed that Tanay is still in high school and already building great hacks leveraging computer vision and machine learning. Truly awesome work that deserves the recognition.
I love the idea and the scrappy implementation, but how does anyone know that this works? What was it validated against? Should the title maybe be "Idea for cheap disease analysis via blood sample and iOS"?
As an analytical chemist, also with a degree in biochem, who has worked extensively on blood analysis: this will not work at any meaningful rate of reliability. I sometimes get frustrated when the key part of some great breakthrough is hidden behind a buzzword, as if it's a cure-all for the details. I would love to know how "machine learning" is going to just magically make this work at a reliable rate. I guess I am just a pessimistic lab rat.
Indeed it won't. Microscopy is hardly the bottleneck in hematology; and neither is cell counting, which is already carried out by automated analyzers for the most part anyway [1,2]. The problem is the need for differential staining: due to fundamental physical limits, no amount of machine learning can ever distinguish key hematocytes like lymphocytes from granulocytes in raw, unstained samples from microphotographs alone. So unless you use spectroscopy --and there's been some work done on that, e.g. [3,4]--, you need to spread, fix and stain your sample, each of which takes a series of choreographed steps, reagents and considerable skill in controlled conditions to get (minimally) right [5] --hence the need for a lab.
So unless they attached a USB microspectrometer to the iPod, or streamlined the existing sample preparation process in a low-cost, fully-portable form, they are just solving the wrong problem.
The point you miss is that microscopy is still a bottleneck in developing and rural areas. H&E staining is not that hard, but you are right that maintaining reagents and a clean lab is a logistical challenge.
You are absolutely right: the fact that microscopy is not sufficient doesn't mean it's not necessary in the first place. Besides, a portable microscope would definitely be a boon for other types of clinical work like urinalysis --as elij pointed out-- or plasmodia detection (for some species, at least). I stand corrected.
I was wondering about the need for staining. Is there potential that they could stain the blood sample and get a result? I'm speaking relatively hypothetically.
This is at least more in the right direction than what I had seen previously.
I saw much worse a few months ago at a competition I was in. The winning team "created" a device (that looked like a USB key). They claimed that if you had a sore throat you could take a sample with a Q-tip, insert it into the device, and it would magically determine the presence of an infection. Those were their words. I was horrified, and when I approached the organizers afterwards they didn't understand my explanation of why it was not possible. Indeed, since then, as I have spoken about it, most people do not understand that it's not currently possible. Sci-fi blurs the realm of possibility for many, and it seems reasonable to them. Back to the actual contest: mine was an "idea competition" and not a YC Hackathon.
Tanay's idea is leaps and bounds closer to the realm of possibility than the idea behind the other team I witnessed. For that, his age, and his other work on his startup clipped.me, I congratulate him and look forward to seeing him come up with something truly useful in the future.
Just for context, Oxford Nanopore have struggled a fair bit with getting this device to work. They have recently released quite promising prototype devices, but the project is many years behind schedule. Although that isn't surprising, given what a massive shift in technology it is.
Oh, sure, but no one thinks they are violating the laws of physics anymore. It's "just" engineering now to get the accuracy and sustained-read lengths high enough.
Why is that not possible? A standard home pregnancy test is small, takes a few minutes, and is fairly reliable at detecting human chorionic gonadotropin (hCG). If you can find an antibody specific for a particular infection, you can make a similar test. It's mostly useless, but certainly possible.
If you're asking about the competition I was in, with the "miracle device," I forgot to mention that they gave multiple examples of what it could detect: sore throat, ear infection, etc. The pregnancy test you mention relies on single-use detection of one hormone; it's a chemical reaction dependent on one specific hormone being detected, not multiple hormones or multiple antibodies. For comparison, a urinalysis strip has multiple markings on it, each for a different test. And they're not reusable. Whereas this just "magically" knew that you had Streptococcus pneumoniae or Haemophilus influenzae present --no demo, no explanation of how.
Yes, I think so. As others pointed out earlier, portable devices with immobilized reagents, like pregnancy tests or glucose monitors, already show the concept is (in principle) feasible; and there's been a lot of recent work dedicated to miniaturizing and integrating all steps of critical assays, including even the most elaborate ones [1]. It's pretty impressive stuff, so I expect a breakthrough any time now.
I definitely agree, though it's still a commendably well-thought-out project in itself, especially by the usual techno-bubbly standards of Silicon Valley hackathons of late.
In particular, do you question whether the quality of the image would be high enough, or whether the ML techniques can automate what a lab tech does while looking through a lens, or is the problem that seeing blood is not enough to diagnose much of anything with any certainty?
"Reliable rate" is relative, and something that I've found lacking in modern medical care in the US even when it's a dude in a lab coat looking at samples through a state of the art microscope...
My shallow knowledge of machine learning tells me that the idea behind it is that initially it will suck at diagnosis and analysis, but over time the algorithm will learn and improve to the point where it eventually becomes actually good at it and even exceeds human capability.
Machine learning, in this context supervised machine learning, is a useful tool for deriving unintuitive relationships between different parts of complex data sets. To do this, there must be some discernible correlation between the parameters of interest that isn't subsumed within the noise of the system+measuring device(s).
In this case, those parameters would be the image data and whatever health parameter is of interest (e.g. white blood cell count). My initial skepticism, perhaps that of the parent comments as well, has more to do with whether the measurements are of high enough quality for any reliable analysis to be done. The app doesn't seem to require any background or contextual data either (though I haven't verified this). If not, false positives and negatives could be problematic.
Anyway, machine learning isn't a form of magic that can transform data with no meaningful sensitivity to something into something that is sensitive to it.
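To make that concrete, here's a minimal sketch of the supervised setup described above (Python with scikit-learn, both my assumptions; the features and labels are entirely synthetic and have nothing to do with the actual app). The point is the precondition: the labels must correlate with the features above the noise floor for any model to learn anything.

```python
# Minimal sketch of supervised learning on image-derived features.
# Everything here is synthetic: in practice the columns would be
# per-cell measurements (area, perimeter, mean intensity) extracted
# from the microscope photos, and the labels would come from a lab.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "features": 500 cells, 3 measurements each.
X = rng.normal(size=(500, 3))
# Toy binary labels that depend on the features *plus noise* -- the
# discernible-correlation requirement described above.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The report breaks out precision/recall, i.e. exactly where the
# false positives and false negatives worried about above show up.
print(classification_report(y_test, clf.predict(X_test)))
```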
That's a dangerous way of thinking about ML. Models aren't magic; they're approximate hacks that end up working for a specific instance of a problem.
More data is always nice, but typically you see accuracy level off (diminishing returns). ML is a constant process of improving your data, increasing the amount of available data (not the same as improving your data), improving your features, and improving your model. No one thing is sufficient.
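As a toy illustration of that leveling-off (again Python with scikit-learn on synthetic data; my assumptions, not anyone's real pipeline), here's the classic learning-curve shape: validation accuracy climbs quickly with more data and then flattens.

```python
# Toy learning curve: accuracy vs. training-set size on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Cross-validated accuracy at 5 increasing training-set sizes.
sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> {score:.3f} CV accuracy")
# Past some point, more of the *same* data barely moves the number;
# gains have to come from better features, data quality, or the model.
```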
I am not trying to take away from what they built, but I am genuinely curious what all was done prior to YC Hacks, since building the lens there seems like a challenge in itself.
Basically, I took a piece of rubber, poked a hole in it, and fit the lens in. It took a couple of hours to get the positioning right but after that it worked like a charm :)
It was described as a "cheap lens" and was very much taped on with electrical tape. I assume it was tested prior to the hackathon, though. The biggest accomplishment is that this was a solo hacker; it was very well done for the time allotted.
From what I remember, it was an iPhone with the flashlight turned on, face down on the table, then a toilet paper roll on top of it, with a blood sample in one of those rectangular glass things on top of the toilet paper roll, and another iPhone examining it. Please correct me if I'm wrong.
One question: why is every single person in the YC Hacks photos using a variant of the MacBook? I'm someone who is about to pursue a CS degree and also embarking on an autodidactic journey in software development; I'm entirely used to Windows machines and do not personally like the Mac interface that much (possibly simply out of habit). Why is it so prevalent for developers and programmers to use Macs? Are Windows machines inferior for such purposes?
I'm curious, though: as a layman in technical details but a fan of the tech world, I've always been under the impression that since Windows was developed by an engineering-centric organization like MS, it should be far better for such technical uses, as opposed to OS X, developed by a design-centric company like Apple. Am I completely wrong in that view?
If you're doing popular web-stack programming (Python, Ruby, Node), you'll have a far better experience on a Unix-based operating system, since much of the tooling is designed with it in mind.
Windows has ports here and there, and there's always Cygwin, but it's going to feel like a compatibility layer the whole time you're using it (especially if you're following guides/stack overflow help).
I've used all three stacks for years and couldn't be happier with my latest MBPr (hardware- or software-wise).
You will be a bit hosed pursuing a CS degree on Windows. People do it, you can make it work, but you'll be constantly fighting to get your system to do useful things.
I strongly preferred Windows/Linux machines for a long time until I gave in to Apple. Their machines are just superior from a hardware standpoint if cost is not highly relevant.
OS X took the intelligent step of basically reskinning BSD (not exactly, but that's close enough), and adopting the Intel platform on the hardware side was enough to bridge the gap and eventually overtake Windows in many tech circles.
That was the biggest thing I missed about Win7 when I switched over to OSX. However, there are several third-party apps that will give you a Win7-like window management experience. I use Spectacle.
As a non-Mac user, the one thing that the Mac does have over Windows is ready access to a Unix terminal out of the box. That's about it.
If that's what you want though, just go with linux and skip the apple tax.
I've found there are two very different worlds of software development. For convenience, I'll label them the "Microsoft" world, and the "Open Source" world. (The labels aren't completely accurate, but I think they're largely descriptive.)
The Microsoft world runs .NET, Visual Studio, C#/VB.NET/ASP.NET, targets the Windows desktop runtime (though increasingly also the web), relies primarily on proprietary (and usually non-gratis) libraries and tools, etc.
The Open Source world revolves around *NIX, uses open-source language implementations (gcc, clang, Java, V8 JavaScript, MRI Ruby, etc), targets the Linux runtime and sometimes OS X/iOS, relies primarily on open-source (and gratis) libraries and tools, etc.
The ecosystem differences go pretty deep. For example, even though either world can interop with practically any SQL database, inhabitants of one will largely choose Microsoft SQL Server while inhabitants of the other will largely choose MySQL/PostgreSQL.
Both can have great software development or terrible software development. It's possible to mix and match (eg, using Windows doesn't preclude you from writing Ruby).
But startups tend to choose the Open Source world, likely due to the combination of lower licensing costs and the "hackability" of open-source software. I'd argue that due to those same reasons, the Open Source world has produced more innovation in the last decade.
Unless you stick with MS dev tools like Visual Studio and .NET, unfortunately, I'd have to say yes.
You can still use Windows for general software development, many people do, but most tutorials and tools are aimed at unix compatible systems. Also, when it comes time to deploy a server on AWS (or any cloud provider other than Azure), it'll help to know your way around a unix-like OS.
Also consider that for iOS development you need a Mac, and those seem more lucrative these days than Windows apps.
Windows machines ROCK for software development. Microsoft's tools are second to none. I say this as someone that's done mostly Linux and Mac dev for the last 6 years. I massively miss the MS toolset.
While I would love to believe you, because as mentioned I am partial to Windows, why is it that you seem to be so isolated in your view? The other responses seem to support the idea that Macs are better for software development.
Also, as a mildly interesting anecdote: when I was in India for a project, I noticed that Indians generally all use Windows machines for software development too. I rarely saw Apple machines there.
MS does have a nicely integrated set of developer tools, but they are not particularly useful for people who aren't developing for a Windows platform. Most consumer-facing websites don't run Windows, nor do most cell phone apps.
There are plenty of Windows jobs building corporate software (think intranet web apps and desktop GUI applications) and so there are indeed many devs who build that stuff and enjoy doing so on Windows. I imagine there is an overlap between the Indian devs and the Windows-related dev jobs.
Personally, I build OSS-based web applications on Linux. My last machine was a Mac and while I enjoyed it I chose Linux this time around because of the lower sticker price and the better native integration with OSS tools.
You're right. It really depends on what you are making. MS's tools only target MS platforms. So if you're not trying to make something for an MS platform, or specifically Windows, then they're out of the question.
The Mac has the advantage that it's a flavor of Unix, so if you're writing server software you're on a machine that's closer to your target environment. Similarly if you run Linux. But the tools on both of those don't come close to MS's tools when you're actually writing the software. At least that's my experience, and many other people's.
As just one example, state-of-the-art debugging on Linux is still command-line GDB. Even many of my Mac friends switch back to GDB to debug on the Mac. Whereas on Windows, Visual Studio's debugger has been amazing to work with for what, 20 years now (19?). Maybe Valve + RAD Game Tools will fix this with their work on LLDB, but it can't come too soon.
As for India, I'd assume they don't want to pay the Apple tax. Best Buy lists 147 PC laptops for under $499. The cheapest Apple is $999. And that's just on-brand stuff; go further off-brand and the prices get even cheaper. I'm not judging quality or specs, but people without money would seem unlikely to pick the $999 one.
Definitely not. I'm a programmer and we are a Windows shop that doesn't do any .NET programming. Sometimes it's a little more difficult for some languages or frameworks (e.g. Django), but really there's not much you can't do.
It also depends on your location. From what I read/see, American developers all have MacBooks, but I'm currently attending a high-tier Western European uni and there are hardly any Macs at all (only professors that came here from the USA have them, lol). The CS program here uses Windows & Ubuntu.
That's actually my personal observation as well. I've traveled extensively around the world doing projects (some of my own previous startups), and in places like Eastern Europe and India I've rarely seen Macs at all. However, when I was in college in the US, I noticed that pretty much anyone who does anything remotely close to programming or graphic design uses a Mac. It definitely speaks to the marketing prowess of Apple.
It depends on where in the world you are, and what sort of development you do. I rarely see regular business app developers with Macs where I am (South Africa). Indeed, Macs are generally used by the wealthy here (weak exchange rate + middle-income country), or by those who need to develop for iOS. Universities either use Linux or Windows in CS labs.
A huge percentage of the developer community worldwide uses Visual Studio, and most run it on Windows, rather than a Mac running a VM.
If you want to do iOS dev, you're pretty much required to have a Mac. Macs also run a flavor of Unix, so they're closer to most web server environments than Windows.
On the other hand, if you want to do graphics (GPU/Shaders) or games all of that is pretty much exclusively Windows based.
There's nothing wrong with Windows. It's a fine OS.
Windows is great for development, and so are other platforms. People who claim that OS X is better are simply biased or drinking the typical Apple fanboy Kool-Aid. If anything, I would compare the development tools, and in that sense Visual Studio is the best IDE out there!
My schooling was in Solaris, Windows, and Unix. About 15-20 years ago, open-source software was mostly available on *nix variants and not as much on Windows. Most people didn't want to pay for software, so they chose *nix. OS X happens to be a variant of FreeBSD (if I remember correctly), so it can run everything that runs on *nix. IMO, it was mainly used by graphic designers because of better hardware and some standard software that came with OS X. I think if you looked at Photoshop, you could do the same things regardless of the OS.
At home, I use Windows on a MacBook because I like the form factor of the MacBook and because Windows makes me more productive. I've done startups and developed under various platforms, and there's not a single system where you're restricted, other than typical iOS stuff on the Mac and Windows Phone stuff on Windows. With new frameworks, you can develop for any mobile platform on any OS. I think OS X fans have probably based their opinion on a handful of issues they've faced in Windows. Nobody seems to compare and contrast specific issues where the Mac is a winner.
People who complain about Windows have probably not tried to install and configure remotely complex products on Linux, like mail servers for instance. Their complaints are not usually first-hand experiences. IIS, on the other hand, was a PITA to configure on Windows, but it really isn't as bad as people made it look.
I've coded in many languages/frameworks on Windows just fine:
- Node.js / server-side JS
- Python
- C++
- C
- Java
- Ruby / RoR
- R
- MATLAB
- Web-based HTML5 / JS
- C#
- VB
The only time I was forced to use OS X was for developing iOS apps, around 2007.
As far as your CS degree goes, I would start out with learning typical core computing concepts:
- Algorithms
- Data structures
- Processor Architecture
- Compilers
- Memory Management
- Etc.
In addition, I would keep a few different kinds of OS's handy and set up the same stuff across all of them to see how it's done and what the issues are. Along the way, keep blogging about your experience so you can always refer to it. This helped me quite a lot in picking the right platforms before venturing into a new project.
And BTW, I write software on Windows that runs on all OSes and cloud services (although we're sticking with Azure for now).
This sounds like something that is done in every American hospital - the "automated diff." In a blood count, the cells are examined and determined to be red cells, white cells, or platelets. Within each category of cell there are sub-types (neutrophils, lymphocytes, monocytes, eosinophils, basophils) and sub-sub-types (bands, metamyelocytes, etc). Basically, you can automatically identify members of various cellular lineages.
Cool that they got this to work through an iPhone camera.
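For anyone curious what the first step of an automated diff looks like in code, here's a rough sketch in Python with SciPy and Pillow (my choice of tools, emphatically not the project's actual code; the filename and threshold factor are placeholders): segment dark, cell-like blobs in a grayscale frame and count them. A real differential would then classify each blob into the lineages above from shape and color features.

```python
# Rough sketch: count cell-like blobs in a grayscale microscope photo.
# The filename and the 0.8 threshold factor are placeholder values.
import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("smear.png").convert("L"), dtype=float)

# In a stained smear, cells are darker than the background: threshold,
# then label the connected components.
mask = img < img.mean() * 0.8
labels, n_cells = ndimage.label(mask)

# Per-blob area is the crudest feature; a real diff would go on to
# classify each blob (neutrophil, lymphocyte, ...) from shape/color.
areas = ndimage.sum(mask, labels, index=np.arange(1, n_cells + 1))
print(f"{n_cells} blobs found, median area {np.median(areas):.0f} px")
```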
Cool! This definitely deserved to win, I think. At the least it's better than the other YC Hacks thing I looked at, which had XSS holes! https://news.ycombinator.com/item?id=8130482
In the article's slideshow there are a bunch of interesting pictures. The very last picture looks to be a box full of Lego blocks? What did that component of the solution do? :)
Glad you mentioned it! Originally I was using that as a mount for the microscope, but unfortunately the lights placed in it weren't strong enough. The iPhone camera flash turned out to work best.
It was noisy at times and the chairs were pretty flimsy, but that's part of the fun of hackathons. It's a test of a lot of things: time management, concentration, willpower, teamwork, communication, etc. After a while you get so focused on what you're doing that you don't notice how big it is.
Some teams were in the old YC building as well. I have to say they did a pretty good job organizing: the WiFi was perfect, power was everywhere, food was awesome and plentiful, and events happened on time.
3rd place: http://ychacks.challengepost.com/submissions/25746-vrniture
My favorites:
http://ychacks.challengepost.com/submissions/25722-savant - basically text-searchable system playback. Think Time Machine meets QuickTime recorder.
http://ychacks.challengepost.com/submissions/25720-gezi-web-... - Tabless browser. It was an interesting concept, using the history as the search/navigation display. He struggled to sell it during the pitch.
http://ychacks.challengepost.com/submissions/25770-listening... - This was an audio parser that searched, in real time, what you were saying in a conversation. So if you said "Apple stock versus Microsoft stock" it would search and the result would be displayed.