My company is going full steam ahead with AI stuff, and a coworker (who is Lebanese; we talk about Palestine, though he also buys into a Jewish-cabal conspiracy theory) loves the promise (fantasy?) of AI, especially GenAI. This mfer uses it to summarize short articles and write his emails. I feel like I'm a crazy person, because I actually enjoy reading and writing.
He sent me a demo yesterday where they had a local instance of an LLM trained on internal data, and sure enough it was able to pull info from disparate sources. It was legit kinda neat. Most of what it did was chatbot stuff, but with NLP and NLG. To me, this seems like a really complicated way of building a search engine, which we already know how to make faster and more efficient, especially since it was just fetching info.
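For the pure fetch-info case, a plain full-text index really does do the same job with way less machinery. A minimal sketch of what I mean, assuming SQLite with the FTS5 extension (the table and its contents are made up for illustration):

```python
import sqlite3

# In-memory DB; assumes a standard SQLite build with FTS5 compiled in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(source, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [
        ("wiki", "Q3 revenue figures are reported in the finance dashboard"),
        ("email", "the finance dashboard refresh runs nightly at 2am"),
    ],
)

# Same "pull info from disparate sources" trick: one ranked keyword query.
for source, body in conn.execute(
    "SELECT source, body FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("finance dashboard",),
):
    print(source, "->", body)
```

Deterministic, cheap, and it never invents a document that isn't in the index.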
However, it was only neat because it was running on internal data with strict boundaries. It also obscures the fact that a massive, comprehensive data dictionary had to be built and populated by people to allow those terms/attributes/dimensions to be linked together. One of the things it did in the demo was execute SQL based off a question like

how many of these items on this date?

which it turned into

select sum(amount) from table where report_date = date

and it also provided graphs showing fluctuations in that data over time. I didn't validate the results, but I would hope it wouldn't make stuff up, especially since the training set was only internal. My experience with other AI apps is that you can ask the same question twice and get different results.
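And once that data dictionary exists, the dictionary plus a template gets you most of the way from question to query without a model at all. A rough sketch of that idea, with entirely hypothetical table and column names:

```python
# Hypothetical data dictionary: business terms -> physical schema.
# In the demo, this mapping is exactly what people had to build by hand.
DATA_DICTIONARY = {
    "items": {"table": "order_facts", "measure": "amount"},
    "returns": {"table": "return_facts", "measure": "amount"},
}

def question_to_sql(term: str, report_date: str) -> str:
    """'How many of these items on this date?' -> a templated aggregate.
    (A real system would parameterize the date instead of inlining it.)"""
    entry = DATA_DICTIONARY[term]
    return (
        f"select sum({entry['measure']}) from {entry['table']} "
        f"where report_date = '{report_date}'"
    )

print(question_to_sql("items", "2024-06-30"))
# select sum(amount) from order_facts where report_date = '2024-06-30'
```

The model gets credit for the last step, but the people who populated the dictionary did the part that makes it work.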
Jfc. Like, who do you blame here? The model for being stupid, or the prompter for not validating? And if they are validating, are there any time savings?
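Because if you do validate, the obvious way is to run the generated SQL next to a query you wrote yourself and diff the numbers, at which point you've done the work twice. A sketch of that check, again with made-up names and data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_facts (report_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO order_facts VALUES (?, ?)",
    [("2024-06-30", 10.0), ("2024-06-30", 5.0), ("2024-07-01", 3.0)],
)

# The query the demo's model supposedly produced (hypothetical).
generated_sql = (
    "select sum(amount) from order_facts where report_date = '2024-06-30'"
)
generated = conn.execute(generated_sql).fetchone()[0]

# The query you have to write yourself to trust the answer -- i.e. the
# exact work the model was supposed to save you.
trusted = conn.execute(
    "select sum(amount) from order_facts where report_date = ?",
    ("2024-06-30",),
).fetchone()[0]

print("match:", generated == trusted)
```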