Hacker News

The fix isn't the hard part. It's the deployment and validation that can take time.

Pretty impressive.



Yep, clearly shows the value of a properly configured CI/CD pipeline.


Yeah here it is:

git pull; sh tests; rsync /prod/ all@prod:/var/www/

^ That is copyrighted by the way. I'll take a consultant fee. I know - I know it should be thousands of lines of Puppet, Jenkins, hooks, Kubernetes, Salt, and 2 million lines of Python and Elm all piped through Docker containers -- I am NOT an animal.


Enterprise edition with test validation and continuous deployment:

while true; do git pull; sh tests && rsync /prod/ all@prod:/var/www/;done


You forgot to rewrite the logic in XML, and then fetch it over the internet from unknown third parties by tunneling it through JSON, then HTTP. Bonus points if the whole thing is deployed via Docker Hub.


!!!

those ... those semicolons should be &&


You need to invoke the script via `sh -ex` for proper exception handling + debuggability.

Also, mktemp and shell exit traps are your friends.
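For anyone unfamiliar, a minimal sketch of what that looks like (the staging dir, filenames, and the echoed status line are all hypothetical, not anyone's actual deploy script):

```shell
#!/bin/sh
# -e: abort on the first failing command; -u: error on unset variables.
set -eu

# mktemp -d gives a unique, private staging directory.
STAGE=$(mktemp -d)

# The exit trap cleans up the staging dir whether the script
# finishes normally, fails partway, or is interrupted.
trap 'rm -rf "$STAGE"' EXIT

# ... build / copy artifacts into "$STAGE" here ...
echo "staged in $STAGE"
# rsync -a "$STAGE"/ all@prod:/var/www/   # (illustrative final step)
```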


Except that this is Facebook, so `sh tests` is going to take 900 cpu-hours


Git pull - you have a staging server!?


I'd be more impressed in some other context, since a willingness to skimp on validation and "red tape" is how a bug like this ends up in production in the first place.


What validation? I'd assume for this one they'd take the "move fast and break things" approach.


The deployment and the validation should be trivial if the fix is trivial.

The difficult part is having someone who reads the report and escalates it, preferably in a timely manner.


The world is littered with the smoking, segfaulted hulks of programs that were quickly deployed after an "obvious" fix.


> The deployment and the validation should be trivial if the fix is trivial.

Trivial. That's what Oculus said.



