Prerequisites
- A domain name
- A server
- A WeChat Official Account
Domain Configuration
Create a new second-level (sub)domain at your domain name service provider and point its A record at your server's public IP.
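Once the record is added, it is worth confirming that the subdomain actually resolves to your server before continuing. A minimal check, using a hypothetical subdomain and IP (replace both with your own values):

```python
import socket

def resolves_to(hostname: str, expected_ip: str) -> bool:
    """Return True if the hostname's A record resolves to the expected server IP."""
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        # Name does not resolve at all
        return False

# Hypothetical values -- replace with your own subdomain and server IP
print(resolves_to("chat.example.com", "203.0.113.10"))
```

Keep in mind DNS changes can take some time to propagate, so a `False` result right after adding the record is not necessarily an error.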
Server Configuration
Upload the following Python file to your server and fill in the marked places in the code (token, API key, port):
```python
import time
import hashlib
import xml.etree.ElementTree as ET

import openai
from flask import Flask, make_response, request
from flask_caching import Cache

cnt = 0
my_wx_token = ""  # Any custom combination of letters and numbers; must match the Token entered later in the Official Account backend
my_gpt_key = ""   # Fill in the API key you created in the OpenAI backend
my_switch_chatgpt = True

app = Flask(__name__)
cache = Cache(app, config={'CACHE_TYPE': 'simple', "CACHE_DEFAULT_TIMEOUT": 30})

outofservice_txt = ("Sorry, <a href=\"/s/0LN37YiERJgMyvIDpzRcAQ\">the ChatGPT service assistant</a> "
                    "is under maintenance. The duration of the maintenance cannot be predicted "
                    "at this time; please try again tomorrow.")


@app.route('/', methods=['GET', 'POST'])
def wechat():
    if request.method == 'GET':
        # URL verification request from the WeChat server
        signature = request.args.get("signature", "")
        timestamp = request.args.get("timestamp", "")
        nonce = request.args.get("nonce", "")
        echostr = request.args.get("echostr", "")
        print(signature, timestamp, nonce, echostr)
        token = my_wx_token
        data = [token, timestamp, nonce]
        data.sort()
        temp = ''.join(data)
        sha1 = hashlib.sha1(temp.encode('utf-8'))
        hashcode = sha1.hexdigest()
        print(hashcode)
        if hashcode == signature:
            print("wechat commit check OK")
            return echostr
        else:
            print("GET error input msg")
            return "error-return\r\n"
    else:
        # Incoming user message, delivered as XML in the POST body
        xmlData = ET.fromstring(request.stream.read())
        msg_type = xmlData.find('MsgType').text
        if msg_type == 'text':
            ToUserName = xmlData.find('ToUserName').text
            FromUserName = xmlData.find('FromUserName').text
            CreateTime = xmlData.find('CreateTime').text
            print(ToUserName)
            print(FromUserName)
            print(CreateTime)
            global cnt
            cnt += 1
            print('-------> ' + str(cnt))
            return generate_response_xml(FromUserName, ToUserName, xmlData.find('Content').text)


def text_reply(FromUserName, ToUserName, output_content):
    reply = '''
    <xml>
    <ToUserName><![CDATA[%s]]></ToUserName>
    <FromUserName><![CDATA[%s]]></FromUserName>
    <CreateTime>%s</CreateTime>
    <MsgType><![CDATA[text]]></MsgType>
    <Content><![CDATA[%s]]></Content>
    </xml>
    '''
    response = make_response(reply % (FromUserName, ToUserName, str(int(time.time())), output_content))
    response.content_type = 'application/xml'
    return response


def generate_response_xml(FromUserName, ToUserName, input_content):
    output_content = generate_response(input_content)
    return text_reply(FromUserName, ToUserName, output_content)


@cache.memoize(timeout=60)
def generate_response(prompt):
    if not my_switch_chatgpt:
        return outofservice_txt
    openai.api_key = my_gpt_key
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,
        max_tokens=1024,
        top_p=1,
        frequency_penalty=0.0,
        presence_penalty=0.0,
    )
    message = response.choices[0].text
    print(message)
    ans = message.strip()
    return ans


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=xxxx, debug=True)  # Replace xxxx with the port you want to open
```
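The POST branch above parses the plain-XML text message that WeChat delivers to the server. As a standalone sanity check of that parsing logic, here is a minimal sketch; the sample field values below are made up for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal text message in the shape WeChat POSTs to the server (sample values)
sample = """
<xml>
  <ToUserName><![CDATA[gh_account]]></ToUserName>
  <FromUserName><![CDATA[user_openid]]></FromUserName>
  <CreateTime>1672531200</CreateTime>
  <MsgType><![CDATA[text]]></MsgType>
  <Content><![CDATA[Hello, ChatGPT]]></Content>
</xml>
"""

def parse_wechat_text(xml_str: str) -> dict:
    """Extract the fields the server handler reads from an incoming text message."""
    root = ET.fromstring(xml_str)
    return {tag: root.find(tag).text
            for tag in ("ToUserName", "FromUserName", "CreateTime", "MsgType", "Content")}

msg = parse_wechat_text(sample)
print(msg["MsgType"], msg["Content"])  # → text Hello, ChatGPT
```

The reply sent back to WeChat (`text_reply` above) uses the same XML shape with `ToUserName` and `FromUserName` swapped, which is why the handler passes them in reversed order.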
Using the Pagoda (BT) panel is a quicker way to configure this. After installing the panel, go to the software store and install the following two plug-ins:
Open the Python project manager and briefly configure the project we want to launch.
After startup, bind the project's domain name; either a top-level or second-level domain works. For example, I filled in the following:
Official Account Configuration
Go to the Official Account backend, find Settings and Development, and open Basic Configuration. Mine is already configured, so the following only demonstrates how to add and enable it.
Click Add Configuration
The Token value is the one you entered in the snippet above: a custom combination of letters and numbers.
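During validation, WeChat sends `signature`, `timestamp`, and `nonce` parameters, and the server checks them against the token: it sorts the three values, concatenates them, and compares the SHA-1 digest with the signature. A self-contained sketch of that check, with made-up parameter values:

```python
import hashlib

def wechat_signature(token: str, timestamp: str, nonce: str) -> str:
    """Reproduce WeChat's URL-verification signature: SHA-1 of the sorted, joined params."""
    data = sorted([token, timestamp, nonce])
    return hashlib.sha1(''.join(data).encode('utf-8')).hexdigest()

# Made-up example values; in production WeChat supplies timestamp, nonce, and signature
token = "mytoken123"
timestamp = "1672531200"
nonce = "abc123"
signature = wechat_signature(token, timestamp, nonce)

# This is the comparison the server's GET handler performs
print(wechat_signature(token, timestamp, nonce) == signature)  # → True
```

If this comparison fails in practice, the usual cause is a mismatch between the Token in the code and the Token entered in the backend form.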
After clicking Submit, if the project on the server runs without errors, the token validation has succeeded.
Then you can go back to the Official Account and enjoy chatting with ChatGPT!
This concludes the tutorial on connecting a WeChat Official Account to ChatGPT, including the Python source code. For more related content, please search my previous articles or browse the related articles below. I hope you will continue to support me!