a_zaydak | 2 years ago
I think the worry about GPT-4 making things up is valid. We all know what happened to the lawyer who used GPT. But I think this comes with training. Users need to be trained to use it as a tool and to verify the outputs. Now, will everyone do this? No. There are lazy and incompetent people in every large organization, and the govt is no different.
The concern about an LLM influencing or biasing its output is also a worry. Maybe there isn't a good solution to this one. I would say that having a govt group test and assess it would be best, but they don't have the expertise to form such a group, which is the whole reason they are leaning on companies like MS to guide their AI usage in the first place. I could also argue that this may not matter much anyway, since govt decisions are already heavily influenced by lobbyists and illegal promotion / favoritism of contractors.