engineerick | 1 year ago | on: Show HN: A Simple Web Crawler with Spring, Postgres and Redis
engineerick's comments
engineerick | 1 year ago | on: Python dev considering Electron vs. Kivy for desktop app UI
Example: https://llm-examples.streamlit.app/
engineerick | 2 years ago | on: Show HN: LLMdantic: Structured Output Is All You Need
After initializing LLMdantic, you can get the prompt by running the following code (the SummarizeInput/SummarizeOutput Pydantic models are example schemas):
"""
from pydantic import BaseModel

from llmdantic import LLMdantic, LLMdanticConfig
from langchain_openai import ChatOpenAI


# Example input/output schemas
class SummarizeInput(BaseModel):
    text: str


class SummarizeOutput(BaseModel):
    summary: str


llm = ChatOpenAI()

config: LLMdanticConfig = LLMdanticConfig(
    objective="Summarize the text",
    inp_schema=SummarizeInput,
    out_schema=SummarizeOutput,
    retries=3,
)

llmdantic = LLMdantic(llm=llm, config=config)

input_data: SummarizeInput = SummarizeInput(
    text="The quick brown fox jumps over the lazy dog."
)
prompt: str = llmdantic.prompt(input_data)
"""
Note that this requires a LangChain LLM model. If you do not want to use one, you can build the prompt directly:
"""
from llmdantic.prompts.prompt_builder import LLMPromptBuilder
from llmdantic.output_parsers.output_parser import LLMOutputParser

output_parser: LLMOutputParser = LLMOutputParser(pydantic_object=SummarizeOutput)

prompt_builder = LLMPromptBuilder(
    objective="Summarize the text",
    inp_model=SummarizeInput,
    out_model=SummarizeOutput,
    parser=output_parser,
)

data: SummarizeInput = SummarizeInput(text="Some text to summarize")

prompt = prompt_builder.build_template()
print(prompt.format(input=data.model_dump()))
"""
But this still uses LangChain for the prompt building. If you have any questions, feel free to ask; I will be happy to help.
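If you want to avoid LangChain entirely, a prompt in the same spirit can be assembled by hand from the Pydantic schemas. This is only a sketch of the general idea, not part of the LLMdantic API; the build_prompt helper and the schemas are illustrative:
"""
from pydantic import BaseModel


# Example input/output schemas, as in the snippets above
class SummarizeInput(BaseModel):
    text: str


class SummarizeOutput(BaseModel):
    summary: str


def build_prompt(objective: str, inp: BaseModel, out_schema: type[BaseModel]) -> str:
    # Embed the JSON schema of the output model so the LLM knows the expected shape.
    schema = out_schema.model_json_schema()
    return (
        f"Objective: {objective}\n"
        f"Input: {inp.model_dump()}\n"
        f"Respond with JSON matching this schema: {schema}"
    )


data = SummarizeInput(text="The quick brown fox jumps over the lazy dog.")
prompt = build_prompt("Summarize the text", data, SummarizeOutput)
print(prompt)
"""
The resulting string can be sent to any LLM client, and the response parsed back with SummarizeOutput.model_validate_json.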