engineerick's comments

engineerick | 1 year ago | on: Show HN: A Simple Web Crawler with Spring, Postgres and Redis

Thanks for the comment and the suggestion on Postgres. I agree about the additional maintenance surface area. However, the reason I used Redis was to become familiar with it in the Spring ecosystem. After writing in Python, Go, and Swift, I came back to Java and decided to become a strong Spring & Java developer. That's why I am writing Spring projects that solve different problems with different tech stacks. Now that you mention it, though, I am interested in using Postgres as a message queue, and I will definitely try it in future projects.
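For context, the "Postgres as a message queue" idea usually relies on `FOR UPDATE SKIP LOCKED` so that concurrent workers never claim the same row. Here is a minimal sketch of that pattern; the `crawl_queue` table, its columns, and the worker function are made-up names for illustration, not part of the project above:

```python
# Sketch of the Postgres-as-a-queue pattern: a jobs table plus
# FOR UPDATE SKIP LOCKED, so competing workers skip rows that are
# already locked by another transaction instead of blocking on them.
# Table and column names here are hypothetical.

CREATE_TABLE_SQL = """
CREATE TABLE IF NOT EXISTS crawl_queue (
    id      BIGSERIAL PRIMARY KEY,
    url     TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'pending'
);
"""

def claim_job_sql(batch_size: int = 1) -> str:
    """Build SQL that atomically claims up to batch_size pending jobs."""
    return f"""
    UPDATE crawl_queue
    SET status = 'in_progress'
    WHERE id IN (
        SELECT id FROM crawl_queue
        WHERE status = 'pending'
        ORDER BY id
        FOR UPDATE SKIP LOCKED
        LIMIT {int(batch_size)}
    )
    RETURNING id, url;
    """

def worker_loop(conn) -> None:
    """Poll the queue using any DB-API connection (e.g. psycopg2)."""
    with conn.cursor() as cur:
        cur.execute(claim_job_sql(batch_size=10))
        for job_id, url in cur.fetchall():
            print(f"crawling {url} (job {job_id})")
    conn.commit()
```

The `UPDATE ... RETURNING` form both claims and fetches the batch in one round trip, which keeps the worker loop simple.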

engineerick | 2 years ago | on: Show HN: LLMdantic: Structured Output Is All You Need

Hello, I'm glad you find it useful. I aimed to create something that would serve a purpose. If you can share details about the use case you are trying to solve, I may add a feature to llmdantic to support it. Right now:

After initializing llmdantic, you can get the prompt by running the following code:

""" from llmdantic import LLMdantic, LLMdanticConfig

from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

config: LLMdanticConfig = LLMdanticConfig( objective="Summarize the text", inp_schema=SummarizeInput, out_schema=SummarizeOutput, retries=3, )

llmdantic = LLMdantic(llm=llm, config=config)

input_data: SummarizeInput = SummarizeInput( text="The quick brown fox jumps over the lazy dog." )

prompt: str = llmdantic.prompt(input_data) """

But here you need to provide a langchain LLM model. If you do not want to use one, you can use the following code instead:

""" from llmdantic.prompts.prompt_builder import LLMPromptBuilder

from llmdantic.output_parsers.output_parser import LLMOutputParser

output_parser: LLMOutputParser = LLMOutputParser(pydantic_object=SummarizeOutput)

prompt_builder = LLMPromptBuilder( objective="Summarize the text", inp_model=SummarizeInput, out_model=SummarizeOutput, parser=output_parser, )

data: SummarizeInput = SummarizeInput(text="Some text to summarize")

prompt = prompt_builder.build_template()

print(prompt.format(input=data.model_dump())) """

But even here we still use langchain for the prompt building. If you have any questions, feel free to ask; I will be happy to help.
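If you want to avoid langchain entirely, one option is to build the prompt by hand from the pydantic schema. This is only a minimal sketch of that idea, not llmdantic's actual prompt format; the model fields and the `build_prompt` helper are assumptions for illustration (pydantic v2):

```python
# Hand-rolled, langchain-free prompt builder: embed the output model's
# JSON schema directly in the prompt text. The schemas below are
# hypothetical stand-ins for the SummarizeInput/SummarizeOutput models.
import json
from pydantic import BaseModel

class SummarizeInput(BaseModel):
    text: str

class SummarizeOutput(BaseModel):
    summary: str

def build_prompt(objective: str, inp: BaseModel, out_model: type[BaseModel]) -> str:
    """Render objective, input data, and the expected output schema as text."""
    schema = json.dumps(out_model.model_json_schema(), indent=2)
    return (
        f"Objective: {objective}\n\n"
        f"Input:\n{json.dumps(inp.model_dump(), indent=2)}\n\n"
        f"Respond with JSON matching this schema:\n{schema}"
    )

prompt = build_prompt(
    "Summarize the text",
    SummarizeInput(text="Some text to summarize"),
    SummarizeOutput,
)
print(prompt)
```

The returned JSON can then be validated back into `SummarizeOutput` with `model_validate_json`, which is essentially what a structured-output parser does for you.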
