mirror of
https://github.com/OMGeeky/gpt-pilot.git
synced 2026-02-23 15:49:50 +01:00
Updated prompts
This commit is contained in:
@@ -1,5 +1,7 @@
{% if debugging_try_num == 0 %}
Ok, we need to debug this issue so we can execute `{{ command }}` successfully. If you cannot debug this by running any command and need human assistance, respond with `NEED_HUMAN`. Otherwise, write a step-by-step explanation of what needs to be done to debug this issue.
{% else %}
I've tried everything you suggested, but it's still not working. Can you suggest other things I can try to debug this issue?
{% endif %}
Ok, we need to debug this issue so we can execute `{{ command }}` successfully. I want you to debug this issue yourself, and I will give you two functions that you can use: `run_command` and `implement_code_changes`.
The `run_command` function will run a command on the machine and return the CLI output to you so you can see what to do next.
The `implement_code_changes` function will change the code - you just need to thoroughly describe what needs to be implemented, and I will implement the requested changes and let you know.
After this, you need to decide what to do next. You can rerun the command `{{ command }}` to check if the problem is fixed, run another command with `run_command`, or change more code with `implement_code_changes`.
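The debug loop this prompt describes can be sketched in Python. The two function names come from the prompt itself; the signatures and the stub body of `implement_code_changes` are assumptions for illustration, not gpt-pilot's actual implementation.

```python
# Hedged sketch of the two-function debug loop from the prompt above.
# run_command and implement_code_changes are named in the prompt; their
# bodies here are stand-in stubs, not gpt-pilot's real code.
import subprocess

def run_command(command: str) -> str:
    """Run a shell command and return its combined CLI output."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout + result.stderr

def implement_code_changes(description: str) -> str:
    """Stand-in stub: in the real flow, a developer agent applies the
    thoroughly described change and reports back."""
    return f"Requested change: {description}"

# The model alternates these two calls, then reruns the original
# command to check whether the problem is fixed.
```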
@@ -1,9 +1,7 @@
{% if step_index != 0 %}
So far, steps {{ finished_steps }} are finished, so let's do
{% else %}
Let's start with the
{% endif %}
step #{{ step_index }}:
Let's start with the{% endif %} step #{{ step_index }}:
```
{{ step_description }}
```
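The template's branch logic can be mirrored in plain Python to show what each rendering looks like. This is a hypothetical sketch for illustration; the variable names (`step_index`, `finished_steps`, `step_description`) come from the template, but the helper function is not code from the repository.

```python
def render_step_prompt(step_index: int, finished_steps: str,
                       step_description: str) -> str:
    """Mirror of the Jinja template's if/else branch, in plain Python."""
    if step_index != 0:
        # Later steps: remind the model which steps are already done.
        lead = f"So far, steps {finished_steps} are finished so let's do"
    else:
        # First step: no finished steps to mention yet.
        lead = "Let's start with the"
    return f"{lead} step #{step_index}:\n```\n{step_description}\n```"

print(render_step_prompt(0, "", "Initialize the project"))
```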
@@ -1,5 +1,5 @@
Now, we need to verify that this change was successfully implemented. We can do that in 3 ways:
1. By writing an automated test or by running a previously written test - this is the preferred way since we can then check that this functionality keeps working in the future. You write automated tests in Jest and you always try to find a way to test a functionality with an automated test. Even if changes seem visual or UI-based, try to find a way to validate them using an automated test, such as verifying HTTP responses or elements rendered on the page. If you choose this type of test, make sure to describe it in as much detail as needed so that anyone who looks at this test knows exactly what needs to be done to implement it.
1. By writing an automated test or by running a previously written test - you write automated tests in Jest and you always try to find a way to test a functionality with an automated test. Even if changes seem visual or UI-based, try to find a way to validate them using an automated test, such as verifying HTTP responses or elements rendered on the page. If you choose this type of test, make sure to describe it in as much detail as needed so that anyone who looks at this test knows exactly what needs to be done to implement it.
2. By running a command (or multiple commands) - this is good for when an automated test is overkill. For example, if we installed a new package or changed some configuration. Keep in mind that in this case, there shouldn't be any human intervention needed - I will run the commands you give me and show you the CLI output, and from that you should be able to determine if the test passed or failed.
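Option 2 - deciding pass/fail purely from CLI output, with no human intervention - can be sketched as follows. The helper name and the example command are illustrative assumptions, not part of gpt-pilot.

```python
# Sketch of verification by command: run a command and decide pass/fail
# purely from its CLI output, as the prompt above describes.
import subprocess

def verify_with_command(command: str, success_marker: str) -> bool:
    """Run the command; the step passes if the marker appears in the output."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return success_marker in (result.stdout + result.stderr)

# e.g. after changing some configuration, confirm the tool still answers:
ok = verify_with_command("echo config-ok", "config-ok")
```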