A Scrum Team consists of one Scrum Master, one Product Owner, and Developers. The team should have all the skills needed to create the product. Relying on people outside the team creates external dependencies, which lead to delays.
Type | Pros | Cons
---|---|---
Native Web Tool | 1. Widely used and supported by a large community of developers. 2. Highly customizable and flexible, allowing a wide range of solutions to be developed. 3. Can be integrated with a variety of databases and APIs. | 1. Steep learning curve, as JavaScript can be complex and requires a good understanding of programming concepts. 2. Can be difficult to maintain and debug, especially for large and complex applications. 3. Can be slow in older browsers and on less optimized devices.
Power BI | 1. User-friendly interface, making it easy for non-technical users to create and customize reports. 2. Offers a range of built-in visualization options and tools for data analysis. 3. Integrates well with other Microsoft/Azure tools, such as Excel and SharePoint. | 1. Limited customization options compared to JavaScript. 2. Less flexible and may not be suitable for more complex reporting requirements.
S3 Select is focused on retrieving data from S3 using SQL:
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases; in many cases you can get as much as a 400% improvement compared with retrieving the whole object from S3.
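As a sketch, such a query can be issued with boto3's `select_object_content`. The bucket name, object key, and SQL expression below are placeholders, and the actual boto3 call is commented out since it requires real AWS credentials:

```python
# Sketch of an S3 Select request; bucket/key are hypothetical placeholders.
# import boto3  # uncomment when running against a real bucket

def build_select_params(bucket, key, expression):
    """Assemble the arguments for s3.select_object_content on a CSV object."""
    return dict(
        Bucket=bucket,
        Key=key,
        ExpressionType="SQL",
        Expression=expression,
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )

params = build_select_params(
    "my-bucket", "data/test.csv",
    "SELECT s.a, s.c FROM s3object s WHERE s.key = 'a'",
)
# s3 = boto3.client("s3")
# for event in s3.select_object_content(**params)["Payload"]:
#     if "Records" in event:
#         print(event["Records"]["Payload"].decode())
```

Only the selected rows and columns travel over the network, which is where the performance gain comes from.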
test.csv
key,a,b,c
a,1,,-1
a,2,,
a,3,,4
test.py
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
spark = SparkSession \
    .builder \
    .appName("spark-app") \
    .getOrCreate()
spark.sparkContext.setLogLevel("WARN")
df = spark.read.csv("test.csv", header=True)
res = df.groupBy("key").agg(
    F.max("a"),
    F.max("b"),
    F.max("c"),
    F.min("a"),
    F.min("b"),
    F.min("c"),
)
print(res.toPandas())
spark-submit test.py
key max(a) max(b) max(c) min(a) min(b) min(c)
0 a 3 None 4 1 None -1
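For comparison, pandas aggregates skip nulls the same way. A minimal sketch using the same CSV content as above, where the all-null column b aggregates to NaN just as Spark returns None:

```python
import io
import pandas as pd

# Same data as test.csv above; column b is entirely empty.
csv = "key,a,b,c\na,1,,-1\na,2,,\na,3,,4\n"
df = pd.read_csv(io.StringIO(csv))

# max/min ignore NaN values; an all-NaN column aggregates to NaN.
res = df.groupby("key").agg(["max", "min"])
print(res)
```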
$ export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4)
$ kubectl create ns kubeless
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml
$ kubectl get pods -n kubeless
$ kubectl get deployment -n kubeless
$ kubectl get customresourcedefinition
def hello(event, context):
    print(event)
    return event['data']
$ kubeless function deploy hello --runtime python2.7 \
--from-file test.py \
--handler test.hello
$ kubectl get functions
$ kubeless function ls
$ kubeless function call hello --data 'Hello world!'
create a file
echo This is a sample text file > sample.txt
delete a file
del file_name
move a file
move stats.doc c:\statistics
combine files
copy /b file1 + file2 file3
import pandas as pd
import pyodbc
import sqlalchemy
import urllib
def get_sqlalchemy_engine(driver, server, uid, pwd, database):
    conn_str = 'DRIVER={};SERVER={};UID={};PWD={};DATABASE={}'.format(driver, server, uid, pwd, database)
    quoted = urllib.parse.quote_plus(conn_str)
    engine = sqlalchemy.create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))
    return engine
if __name__ == '__main__':
    # create engine
    driver = 'ODBC Driver 17 for SQL Server'
    server = 'xxx'
    uid = 'xxx'
    pwd = 'xxx'
    database = 'xxx'
    engine = get_sqlalchemy_engine(driver, server, uid, pwd, database)
    # read excel
    file_path = 'xxx'
    df = pd.read_excel(file_path)
    # load into SQL Server
    schema_name = 'xxx'
    table_name = 'xxx'
    df.to_sql(table_name, schema=schema_name, con=engine, index=False, if_exists='replace')
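To sanity-check the `to_sql`/`read_sql` round trip without a SQL Server instance, the same flow can be run against an in-memory SQLite engine. The table and column names below are made up for the demo, and `schema` is omitted because SQLite has no schemas:

```python
import pandas as pd
import sqlalchemy

# In-memory SQLite stand-in for the SQL Server engine above.
engine = sqlalchemy.create_engine("sqlite://")

df = pd.DataFrame({"id": [1, 2], "name": ["foo", "bar"]})
df.to_sql("demo", con=engine, index=False, if_exists="replace")

# Read the table back to confirm the round trip.
out = pd.read_sql("SELECT * FROM demo", con=engine)
print(out)
```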
Create a network named "test"
docker network create test
Create two containers using the network
docker run --name c1 --network "test" --rm --entrypoint tail mongo -f
docker run --name c2 --network "test" --rm --entrypoint tail mongo -f
Exec into one container and ping the other; name resolution works:
docker exec -it c1 bash
apt-get update && apt-get install -y iputils-ping # install the ping command
root@79568c5ce391:/usr/src/app# ping c2
PING c2 (172.18.0.3) 56(84) bytes of data.
64 bytes from c2.test (172.18.0.3): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c2.test (172.18.0.3): icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from c2.test (172.18.0.3): icmp_seq=3 ttl=64 time=0.232 ms
...
Using the default "bridge" network does not work, because only user-defined networks provide automatic DNS resolution between containers:
docker run --name c1 --rm --entrypoint tail web_scraper:v1 -f
docker run --name c2 --rm --entrypoint tail web_scraper:v1 -f
docker run --name c1 --network "bridge" --rm --entrypoint tail web_scraper:v1 -f
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports.
$ docker run --expose 80 ubuntu bash
This exposes port 80 of the container without publishing the port to the host system’s interfaces.
docker images
docker build -t image_name .
docker rmi $(docker images | grep "^<none>" | awk '{print $3}') # remove all untagged images
docker save image_name > image_name.tar # save image as a tar file
docker load < image_name.tar # load image from a tar file
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
docker ps -a
docker exec -it ubuntu_bash bash
docker rm container_name
docker rm $(docker ps -a -q) # remove all stopped containers
docker volume create mongodbdata
docker volume ls
docker volume inspect mongodbdata
docker network ls
docker network create network_name
docker network inspect network_name
docker network rm network_name
docker login azure --tenant-id 8432da0f-f8af-4b02-b318-4c777cfab498
docker context create aci hpjacicontext
docker context use hpjacicontext
docker compose up
docker-compose up --detach --force-recreate
Sample Code: https://github.com/guangningyu/react-skeleton