Replit is an IDE startup developing Ghostwriter, an AI-powered code-generation tool. This week it raised nearly $100 million in funding, an extension of its Series B round led by Andreessen Horowitz, with participation from Khosla Ventures, Coatue, SV Angel, Y Combinator, Bloomberg Beta, Naval Ravikant, ARK Ventures and Hamilton Helmer.
Replit Will Expand Its Services
Amjad Masad, founder and CEO of Replit, commented: "We are relentless in our mission to empower one billion software developers." He added that the new funds, which bring Replit's total raised to over $200 million, will go toward developing the core product experience, expanding Replit's cloud services and "driving innovation" in AI. "AI has already brought this future closer," Masad said. "We look forward to expanding our offerings for professional developers."
Replit is based in San Francisco. It was co-founded in 2016 by programmers Amjad Masad and Faris Masad and designer Haya Odeh. Before creating Replit, Amjad Masad worked in engineering roles at Yahoo and Facebook, where he built developer tools.
So what does Replit do? It offers an online, collaborative IDE that supports a range of programming languages, including JavaScript, Python, Go and C++. Users can share a workspace with one or more other users, see each other's edits across files in real time, message one another and debug code together. They can also share projects, ask for help, learn from tutorials and start from templates.
Ghostwriter appears to be the driving force behind Replit's recent rapid growth, which has led to a partnership with Google Cloud and a user base of more than 22 million developers. However, like all generative AI tools, it carries risks and potential legal implications that have yet to be fully tested in the courts.
It is unclear whether Ghostwriter was trained on licensed or copyrighted code. Replit itself notes that Ghostwriter's proposed code may contain "inaccurate, offensive or otherwise inappropriate" strings, including insecure code. According to a recent Stanford study, software engineers who use code-generating AI systems are more likely to introduce security vulnerabilities into the applications they develop.
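To make that risk concrete, here is a minimal sketch in Python of the kind of vulnerability the Stanford study describes: SQL injection from string-interpolated queries. The example is hypothetical, not taken from Ghostwriter's output, and the function and table names are invented for illustration.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern: interpolating user input directly into SQL.
    # An input like "x' OR '1'='1" matches every row in the table.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query lets the driver handle escaping.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "x' OR '1'='1"
    print(find_user_insecure(conn, malicious))  # leaks both rows
    print(find_user_safe(conn, malicious))      # returns an empty list
```

Both versions compile and pass a quick happy-path test, which is exactly why a suggestion like the first one can slip past review unnoticed.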